

Computer science fundamentals: 3 essential areas for developers

Jun 29, 2022 - 12 min read
Andrew Zahler

Computer science is a vast field with multiple branches. But whether you work in data science, computer networks, cryptography, web development, or another area, you need a strong foundation in computer science fundamentals to achieve your goals.

Wherever you are on your programming journey, knowing the fundamentals will help you become a more informed and effective developer. For example, you may learn how to develop an algorithm using expertise in your area of interest. But you’ll still need to be comfortable with computational thinking and other foundational topics in order to:

  • Compute the algorithm’s time and space complexity
  • Make the best use of available data structures
  • Prove the algorithm’s correctness

You may also need to decide which programming language is best suited for your tasks. To do so, you should understand:

  • When a programming language should be high-level or low-level
  • When to prefer a programming language with a compiler vs an interpreter

We’ll explain these concepts later in this article, but suffice to say you simply can’t solve such problems without grasping the fundamentals of computer science. Today, we’ll introduce you to three areas of CS fundamentals at a high level and suggest further reading in each area. From theoretical computer science work to software engineering or software development, knowing these fundamentals will prove invaluable, whether you have a CS degree or not. Let’s get started!

Learn to code today.

Try one of our 300+ courses and learning paths: Introduction to Computers and Programming.


1. Hardware and software fundamentals

Let’s start at a foundational level: the machines you program on, and the programs they run. Computer architecture is the set of rules and methods describing how hardware and software are joined and interact to make a computer function. That definition introduces two core ideas: hardware and software. Hardware is anything physically connected to a computer. For example, your display monitor, printer, mouse, and hard drive are all hardware components. Compare this to software: a collection of programs and procedures that perform tasks on a computer. Software is an ordered sequence of instructions that changes the state of a computer’s hardware.


What to learn first about hardware

Some topics in computer hardware to know about include:


Hardware components

  • Central Processing Unit (CPU): Processes information on a computer. It’s a physical object that takes data from the main memory, processes it, and returns the updated data to the main memory.
  • Control unit (CU): A subunit of the CPU that controls data flow from and into the main memory.
  • Arithmetic and logic unit (ALU): Another subunit of the CPU that is responsible for processing the arithmetic and logic operations.
  • Input units: Take data from the world or an input device and convert it into streams of bytes. Examples: keyboard, mouse, microphone, camera, and USB.
  • Output units: Take processed data from the CPU and render it in a human-understandable way. Examples: monitor screens, printers, and headphones.
  • Storage units: Where data is stored after being retrieved and processed. The storage unit, or memory, is the physical memory space.
  • Memory: Includes both the main memory or random access memory (RAM), which are physical memory spaces in the computer, and secondary storage, like hard drives, CDs, USB flash drives, etc.

Hardware architectures

  • Von Neumann architecture: A 1945 design by John von Neumann, still used in most computers produced today, in which program instructions and data share the same memory and pathways.
  • Harvard architecture: A computer architecture in which the storage and signal pathways for data and instructions are separated, in contrast to the von Neumann architecture.
  • Instruction set architecture (ISA): An abstract model of a computer. An implementation is a device that executes instructions specified by an ISA. Generally, an ISA specifies the following for a family of implementations:
    • Instructions
    • Data types
    • Registers
    • Hardware support for managing main memory
    • Fundamental features
    • Input/output model

What to learn first about software

Some topics in software to know about include:


Types of programming languages

  • Machine language: The only language a computer can process directly, consisting of streams of ones and zeros known as binary. Machine language is considered a low-level programming language.
  • Assembly language: A human-readable low-level programming language whose instructions correspond closely to machine instructions; it must be translated into machine language before the computer can execute it. Assembly languages are a bridge between machine language and high-level programming languages.
  • High-level languages: What most developers mean when they say “programming languages” (e.g., Python, C++, Java). These languages allow the creation of powerful, complex, human-readable programs without large numbers of low-level instructions (i.e., assembly instructions).
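To see these levels of abstraction in practice, Python’s standard `dis` module can disassemble a high-level function into the lower-level bytecode instructions the interpreter actually executes. (A minimal sketch; the exact opcodes printed vary by Python version.)

```python
import dis

def add(a, b):
    return a + b

# A single high-level line compiles to several lower-level instructions.
for instr in dis.get_instructions(add):
    print(instr.opname)
```

One line of Python expands into multiple load, operate, and return instructions, mirroring how high-level source maps down toward machine language.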

Key software types

  • Assembler: A utility program that translates an assembly language program into machine language.
  • Compiler: A program that translates source code written in a high-level programming language into machine-readable target code in a lower-level language, such as assembly or machine language. Once the translation is complete, the target code is passed to the target machine for execution.
  • Interpreter: A program that translates source code written in a high-level programming language into machine-readable target code in a lower-level language piece by piece while the source code is being executed.
  • Operating system: Software that supports a computer’s basic functions, manages computer hardware and software resources, and provides common services for computer programs.
  • User applications: Software written for the end user to carry out a specific task other than operating the computer system itself. Today, such applications may take the form of standalone, web-based, or mobile applications.
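The compiler/interpreter distinction is a spectrum in practice. CPython, for example, first compiles source code to a bytecode object and then interprets it. A minimal sketch of both steps using the built-in `compile` and `exec`:

```python
# CPython first compiles source to a bytecode code object...
source = "result = sum(range(10))"
code_obj = compile(source, "<example>", "exec")

# ...then the interpreter executes that bytecode.
namespace = {}
exec(code_obj, namespace)
print(namespace["result"])  # 45
```

Languages usually labeled “interpreted” still involve a translation step; the difference is when and how that translation happens.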

Go deeper: Learn more about hardware and software concepts, and about compilers vs. interpreters.

Download the CS Fundamentals cheat sheet (PDF)

2. Data structures and their properties

Our next area is data structures. Data structures are formats for the organization, management, and storage of data that enable efficient access and modification. As we’ll discuss in our third section, you apply algorithms to data structures for problem-solving.


What to learn first about data structures

Some data structure topics to know about include:

  • Array: A collection of items of the same variable type that are stored sequentially in memory. Each item in an array is indexed starting with 0, and each item is known as an element. Arrays are best suited to retrieve data in a constant time (using index) but do not provide fast insertion or deletion of data. Read more about arrays.
  • Linked list: A linear sequence of nodes that are linked together. In a singly linked list, each node contains a value and a pointer to the next node in the list. Unlike arrays, singly linked lists do not have indexes, so you must start at the first node and traverse through each node until you get to the nth node. Linked lists provide faster insertion and deletion but slower data retrieval compared to arrays. Read more about linked lists.
  • Tree: A non-linear data structure often used to represent hierarchical data. For example, a company’s organizational chart can be represented as a tree. Read more about trees.
  • Stack: A linear structure that stores elements in last-in, first-out (LIFO) order. It might help to imagine a stack of plates: the last plate you put on top is the first one you take off. Read more about stacks.
  • Queue: Similar to a stack in that both are linear data structures with dynamic size. However, queues are first-in, first-out (FIFO) data structures. Imagine lining up for a roller coaster: the first people in line board the ride first. Read more about queues.
  • Graph: An abstract notation that represents the connection between all pairs of objects. Read more about graphs.
  • Hash table: Depends on the process of hashing, or assigning an object into a unique index, known as a key. Each object is identified using a key-value pair, and the collection of objects is known as a dictionary. A hash table is implemented by storing elements in an array and identifying them through a key. A hash function takes in a key and returns an index for which the value is stored. Read more about hash tables.
  • Heap: An advanced tree-based data structure used primarily for sorting and implementing priority queues. Read more about heaps.
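To make some of these trade-offs concrete, here is a minimal Python sketch of a singly linked list, alongside a stack and a queue built from the standard library (`list` and `collections.deque`):

```python
from collections import deque

class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class SinglyLinkedList:
    def __init__(self):
        self.head = None

    def prepend(self, value):
        # O(1) insertion at the head, unlike shifting every element of an array.
        node = Node(value)
        node.next = self.head
        self.head = node

    def to_list(self):
        # O(n) traversal from the head, since nodes have no indexes.
        out, node = [], self.head
        while node:
            out.append(node.value)
            node = node.next
        return out

sll = SinglyLinkedList()
for v in (3, 2, 1):
    sll.prepend(v)
print(sll.to_list())  # [1, 2, 3]

# Stack (LIFO) via a list; queue (FIFO) via a deque:
stack = [1, 2, 3]
stack.append(4)
top = stack.pop()          # last in, first out -> 4

queue = deque([1, 2, 3])
queue.append(4)
front = queue.popleft()    # first in, first out -> 1
```

Notice how each structure makes a different operation cheap: the linked list prepends in constant time, the stack removes from the end, and the queue removes from the front.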

3. Algorithms: Complexity and design

To a computer scientist, an algorithm is a series of well-defined instructions that tell a computer what to do to solve a problem. As mentioned above, algorithms are applied to various data structures, and they’re a favorite topic for coding interviews.


What to learn first about algorithms

Some algorithm topics to know about include:


Time complexity and correctness

  • Asymptotic time complexity: An analysis of how an algorithm’s running time grows with input size, independent of any particular platform or input. Such an analysis tells us how a program scales as its input grows, regardless of the underlying machine. We use Big O to represent the upper bound, Big Omega (Ω) the lower bound, and Big Theta (Θ) the tight bound of running time. Asymptotic analysis is generally preferred over measurements taken on a specific input and platform.
  • Time complexity of recursive algorithms: Computing the asymptotic time complexity of iterative algorithms is straightforward, but for recursive algorithms we use the substitution method, the Master theorem, or the recursion tree method. Among them, the substitution method is considered the most rigorous, as it’s based on mathematical induction.
  • Asymptotic space complexity: An analysis of how much memory an algorithm takes. The same asymptotic notations (Big O, Big Omega, and Big Theta) are also used to represent the space complexity of an algorithm.
  • Correctness proof techniques: Approaches used to prove that a given algorithm is correct and will always produce the intended output. One example would be proving that a sorting algorithm will always sort a list, regardless of the data in the list. The most common and widely used correctness technique is called “loop invariant,” which is based on mathematical induction.
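To make the Big O contrast concrete, the sketch below counts comparisons for a linear search, which is O(n), versus a binary search, which is O(log n), over the same sorted data:

```python
def linear_search(items, target):
    # O(n): in the worst case, every element is inspected.
    steps = 0
    for i, value in enumerate(items):
        steps += 1
        if value == target:
            return i, steps
    return -1, steps

def binary_search(items, target):
    # O(log n): the sorted search range is halved on every step.
    steps, lo, hi = 0, 0, len(items) - 1
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid, steps
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

data = list(range(1_000))
_, linear_steps = linear_search(data, 999)  # worst case for linear search
_, binary_steps = binary_search(data, 999)
print(linear_steps, binary_steps)  # 1000 10
```

Growing the input tenfold adds roughly tenfold more work for linear search, but only a few extra steps for binary search, which is exactly what the asymptotic notation predicts.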

Algorithm design techniques

  • Brute force: A method that tries every possibility to find a solution to the problem at hand. It’s often the first approach that comes to mind, but it’s also the least efficient, and it usually cannot produce a solution in a feasible amount of time. For instance, cracking a reasonably strong password by brute force could take hundreds of years.
  • Divide and conquer: A pattern that breaks a problem into smaller subtasks that are then solved using recursion and eventually reassembled. Recursion is the practice in which a function calls itself directly or indirectly. Examples of divide and conquer algorithms include merge sort and quicksort.
  • Dynamic programming: A pattern similar to divide and conquer. We divide a big problem into small subtasks and combine their solutions. However, a key difference is that a subtask may overlap with other subtasks. Thus, to reduce running time we save the results of each subtask in memory, called memoization. Memoization ensures that each subtask is performed only once. Whenever a subtask is needed again, its result is retrieved instantly from memory.
  • Greedy: An approach in which we try to solve each subtask using the best possible local solution available, called local optima. A greedy algorithm yields optimal results only when local optima leads us to the global optima, the best possible global solution. Examples of greedy algorithms are Prim’s algorithm, which finds a minimum spanning tree, and the Dijkstra algorithm, which finds the shortest path in a graph.
  • Other design techniques: Approximation algorithms find a near-optimal solution when finding an optimal solution is either time-consuming or not feasible. Randomized algorithms and linear programming are other frequently used algorithm design techniques.
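The contrast between brute force recursion and dynamic programming can be sketched with the classic Fibonacci example: plain recursion recomputes the same overlapping subtasks exponentially many times, while memoization (here via the standard library’s `functools.lru_cache`) stores each result so it’s computed only once:

```python
from functools import lru_cache

def fib_brute(n):
    # Plain recursion recomputes overlapping subproblems: O(2^n) calls.
    if n < 2:
        return n
    return fib_brute(n - 1) + fib_brute(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Memoization caches each subproblem, so each n is computed once: O(n).
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(90))  # instant; fib_brute(90) would not finish in a lifetime
```

Both functions compute the same values; only the memoized version remembers its subtask results, which is the key idea behind dynamic programming.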

Key algorithm categories

  • Sorting and searching: Algorithms that put the elements of a list into order, or check for or retrieve an element from any data structure where it’s stored. Sorting examples include mergesort, quicksort, bubble sort, selection sort, and insertion sort. Searching examples include linear search and binary search.
  • Graph: Algorithms that are used to solve problems of representing graphs as networks (e.g. airline flights, how the Internet is connected, social network connectivity). As defined earlier, a graph is an abstract notation that represents the connection between all pairs of objects.
  • Shortest path: Algorithms used to find the shortest path in a graph, a fundamental problem that appears across many areas of computer science. Many shortest path algorithms exist, each with its own advantages and disadvantages; which one to use depends on the structure of the graph and the application.
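As one concrete sorting example, here is a minimal merge sort in Python, which also illustrates the divide and conquer pattern described above:

```python
def merge_sort(items):
    # Divide: split the list in half until single elements remain.
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])

    # Conquer: merge the two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```

Each level of recursion does O(n) merging work across O(log n) levels, giving merge sort its O(n log n) running time.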

Go deeper: Learn more about Big O Notation, time complexity, and space complexity; top algorithms you should know; recursion; and graph algorithms.




Take your learning journey further

To develop your computer science expertise as a programmer, you’ll want to be confident with the terms under each area we’ve discussed today. Learners of all levels can dive deeper into these fundamentals with one of the interactive computer science courses we’ve created; there are no prerequisites.

Happy learning!



