The journey from theory to practice in software engineering - Part 1

As a third-year student starting my first job, I felt overwhelmed by the project's complexity. The main repository alone comprised around 80 different assemblies, a codebase far beyond anything I had encountered at university. There, our focus was primarily on theoretical topics like data structures, graph algorithms and low-level operating system procedures. Yet, in my subsequent web development roles, I rarely, if ever, needed to implement such specific algorithms. This led me to question the practicality of my study program. Were my studies too theoretical for the everyday demands of software development?

Programmer journey from theory to practice

In this blog series, I’ll explore how learning computer science can provide a solid basis for your software engineering path. I’ll dive into things that you’ll rarely learn at university and that you usually absorb only once you start your career. I will examine how theory blends with practice and how it can help us do our jobs better. Lastly, I’ll discuss the right balance between theoretical skills and applied knowledge, and what you should know before your first job.

Let’s start with the foundations of software engineering.

Understanding the theory of computer science

Firstly, let me define what I mean by “the theory”. These are the concepts that we learn at university, which form the foundations of software engineering. Although we refer to it as “theory”, this knowledge can still be very practical in specific fields such as the design of operating systems, DBMSs, compilers, cryptography or the study of NP-completeness. All that said, we still call it “theory” because most companies that build modern software apps base their solutions on top of these components, without having to deal with them directly.


If we don’t usually have to deal with it, how is knowledge of computer science beneficial in our jobs?

Algorithmic thinking

One key aspect is that algorithmic thinking is hard and very unintuitive at the beginning. A certain level of algorithmic thinking is always required in a programmer's job, and you want it to come easily, so you can focus on more important things. Developing this skill thoroughly will make it effortless later on. To illustrate this, imagine you’re lifting heavy weights at the gym so that you can easily carry your shopping bags later. Your shopping bags won’t weigh 100kg (unless you’re preparing a BIG party), but carrying them will be smooth and simple. You won’t even notice that you are bearing them. You may easily focus on a conversation with your friend as opposed to sweating in a struggle to bring them home.

Computational complexity

Additionally, you get a good grasp of data structures and the computational complexity of algorithms. You have an intuitive understanding that HashSet.Contains() is O(1), whereas Array.Contains() is O(n), and that Find(x => x.Prop == propValue) will be O(n) on both. You know where to focus your optimisation efforts. If you need a custom string search, you know it can be done in linear time. However, if you recognise some form of the travelling salesman problem, you may want to consider a heuristic solution instead. Going back to the gym analogy, you know which muscles to use when the weight is too heavy. Using your back rather than only your biceps can significantly improve your capabilities.
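The examples above are from .NET, but the idea is language-agnostic. Here is a minimal Python sketch (the CountingKey wrapper is hypothetical, purely for instrumentation) that counts equality comparisons to make the O(1)-versus-O(n) difference visible without any timing:

```python
# A hypothetical wrapper that counts equality comparisons, to make
# the cost of a membership test visible without timing anything.
class CountingKey:
    comparisons = 0  # class-wide counter

    def __init__(self, value):
        self.value = value

    def __hash__(self):
        return hash(self.value)

    def __eq__(self, other):
        CountingKey.comparisons += 1
        return self.value == other.value

n = 10_000
keys = [CountingKey(i) for i in range(n)]
as_list = list(keys)
as_set = set(keys)

target = CountingKey(n - 1)  # worst case for the list: the last element

CountingKey.comparisons = 0
assert target in as_list
list_comparisons = CountingKey.comparisons  # roughly n comparisons

CountingKey.comparisons = 0
assert target in as_set
set_comparisons = CountingKey.comparisons  # a handful, independent of n
```

On 10,000 elements, the list scan performs thousands of comparisons, while the hash-based set needs only a few, no matter how large it grows. That intuition is exactly what lets you spot the accidental O(n) lookup inside a loop during code review.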

Broader view

You also get a broader view of different aspects of software engineering. A good example is cybersecurity awareness. Modern tools usually prevent the most foolish mistakes, but SQL injection and XSS are still a thing. If you haven’t played with them before, it’s hard to recognise these issues at first glance. I wholeheartedly recommend taking a cybersecurity course at your uni; it’s great fun to find and exploit such flaws (that’s one of the reasons why people do it :)). You didn’t know you’d ever use that muscle before, but now? Wow! What would have happened if you hadn’t trained it?
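To make the SQL injection risk concrete, here is a small sketch using Python's standard sqlite3 module (the table and payload are made up for illustration). The vulnerable version splices user input straight into the query text; the safe version passes it as a bound parameter:

```python
import sqlite3

# A throwaway in-memory database with made-up data, purely to
# demonstrate the difference between spliced and bound parameters.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

def find_user_vulnerable(name):
    # DON'T: attacker-controlled input becomes part of the SQL text.
    query = f"SELECT name FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # DO: the driver sends the value separately from the SQL text.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
leaked = find_user_vulnerable(payload)  # condition is always true: every row leaks
safe = find_user_safe(payload)          # no user is literally named "' OR '1'='1"
```

The classic `' OR '1'='1` payload turns the vulnerable query's WHERE clause into a tautology and leaks every row, while the parameterised query treats the same string as an ordinary (non-matching) value.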

As we navigate the complexities of a business project, it becomes apparent that the challenges we face extend far beyond what is taught at university. Now, let's examine the more hands-on, practical aspects of software development.

Practical software development

In transitioning from academia to the professional world, many new graduates face unexpected difficulties. Let’s explore what might be missing from the Computer Science curriculum. Picture yourself freshly graduated and stepping into your first software development role. What is new? What is unexpected? What is hard? From making sense of legacy code to managing big-scale data, we'll discover the surprises that can catch recent graduates off guard.


One thing that will be challenging is bug analysis. At university, you’re usually building something new. You can run all your code locally, and you’re the main user. You ‘report’ and fix the bugs yourself.


Now, this is drastically different in your job. You’re given a huge, old codebase written many years ago, and it takes time to find your way around it. When you’re given a bug to resolve, you sometimes can’t easily contact the original reporter, and you can’t reproduce the issue locally. What you’re left with is usually drilling through the logs, or adding extra logs to be drilled through later. Resolving such bugs takes a very thorough understanding of how the code works. You need the ability to pose and verify plausible hypotheses about the issue, and knowing which assumptions to challenge requires a certain level of intuition and knowledge. You usually start with the recent changes, but sometimes it turns out that this particular feature has been broken for years. Once in a while, the fault is not even in your codebase, but in a library or the base Docker image.
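As a sketch of "adding extra logs to be drilled through later", here is a hypothetical Python example (the billing.discounts subsystem, discount codes and in-memory handler are all invented for illustration): log the inputs and the branch taken at the suspected failure point, so that the next occurrence in production leaves a trail:

```python
import logging

# Hypothetical subsystem: a discount calculation we suspect misbehaves
# for inputs we cannot reproduce locally.
logger = logging.getLogger("billing.discounts")
logger.setLevel(logging.DEBUG)
logger.propagate = False

records = []

class ListHandler(logging.Handler):
    """Collects formatted log lines in memory, standing in for a real log sink."""
    def emit(self, record):
        records.append(self.format(record))

handler = ListHandler()
handler.setFormatter(logging.Formatter("%(levelname)s %(name)s %(message)s"))
logger.addHandler(handler)

def apply_discount(total, code):
    # Log the inputs and the branch taken, so the next production
    # occurrence leaves a trail we can drill through later.
    logger.debug("apply_discount: total=%r code=%r", total, code)
    if code == "WELCOME10":
        result = round(total * 0.9, 2)
    else:
        logger.warning("apply_discount: unknown code %r, no discount applied", code)
        result = total
    logger.debug("apply_discount: result=%r", result)
    return result

apply_discount(100.0, "WELCOME10")
apply_discount(100.0, "WELCOME1O")  # a typo'd code: the warning pinpoints it
```

The point is not the discount logic but the trail it leaves: when the "it sometimes doesn't apply my discount" report arrives, the warning line with the exact rejected input is already waiting in the logs.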

Let me share a droll example from my early career. At the time, we were dealing with a bug reported by just one user, but she was very vocal about it, as the issue interfered with her daily responsibilities. We struggled to replicate the problem and faced internal bureaucratic delays in contacting her directly. The issue remained unresolved for over a year, and she changed jobs before we could find a solution. Interestingly, we never received another report about it. The problem simply disappeared.

Debugging complex issues can be pretty overwhelming at first, and it’s not something you can easily learn at college. A course dedicated to 'Bug Analysis', where students tackle real bug reports in large-scale applications, could be an interesting addition to software engineering programs.


Your assignments at university will be much different from the ones you get at your company. During your studies, projects usually come with a very specific set of requirements: if you meet them all, you get the highest grade. At your job, on the other hand, you get a task and you’re responsible for figuring out what to do to make it work.


It will be challenging to figure out how the current codebase works and what you should change. What will be even harder is navigating the ambiguity of the task itself. Often, the business might not have a precise idea of what it wants, at least not in the detail necessary for us to develop working code that handles all edge cases. It will usually be your responsibility to confirm the requirements. You don’t want to spend a week crafting a solution you will later need to throw away. Of course, it’s not always possible to plan everything; sometimes, even with the best effort, you only discover problems once you start dealing with the code.

Learning to let go of your assumptions, particularly in areas outside your expertise, becomes a valuable skill. Your “improvement” may not be what users want, or may not be worth the effort. It may be too risky to change that part of the codebase, or it may require too much testing. There are so many more variables than in a simple one-person project.

One noteworthy instance from my early career was a small code quality improvement I implemented while adding a new feature. The improvement wasn't essential for the feature, but I believed it would enhance the overall code. Unfortunately, this seemingly minor change led to the application crashing. Despite three attempts to fix it, issues persisted. The new feature itself was fine; the problem lay only with the added improvement. Ultimately, I decided to revert it completely. This was a valuable lesson in identifying which changes are truly required and which might cause unnecessary work. Sticking to what was strictly required for the feature would have been the right call. This is an example of “scope creep”, where the boundaries of a change extend beyond its original scope, often complicating the work unnecessarily. Since then, I have tried to examine the scope of my changes more thoroughly.

Handling real-world data

In one-off projects, you usually don’t care about the amount of data your application produces. The volume is so low that performance or storage problems are non-existent. You usually have a single user, and the data comes only from you playing with the app. If something breaks, you can usually fix the records in the database by hand.

(Disclaimer: I’m talking about programming projects here. One of my colleagues in a Data Science program had to use a 700-CPU cluster to process the data for his master's thesis.)


In the real world, there are a variety of new problems that you need to consider once the data volume starts growing. A few examples include:

  • Validation: Whatever you put into the database, your application has to deal with later. Editing millions of records by hand would be difficult, error-prone and time-consuming. Even if you clean up your data with one-off scripts, writing them is an extra effort, one that could have been spent on features or paying down technical debt if only you had validated or transformed the data before saving it.

  • Scalability: Once you reach a certain volume of data, you need to continuously examine your changes in terms of performance. It's hard to verify such things in every single PR, but it becomes more and more important as your database grows. Examples include:
    • Can I add this extra “join” or will it be too much for this query? 
    • Maybe I should process this asynchronously, it takes 10 minutes for a single record!
    • This table has 2 billion rows now; maybe we should start thinking about database sharding.
    • We plan to update 10 indexes in the next release to optimise key lookups.
  • Lifecycle Management: You will want to implement some processes that allow you to archive and/or delete old data. Otherwise, your tables will keep growing and growing, filled with information that you don’t need anymore. This will necessitate some kind of background jobs, which is yet another thing to maintain.
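To illustrate the validation point above, here is a minimal Python sketch (the UserRecord fields and rules are hypothetical): normalise and validate every record at the boundary, so bad data never needs a million-row cleanup script later:

```python
from dataclasses import dataclass

# Hypothetical record and rules, just to illustrate the idea:
# normalise and validate at the boundary, before anything is saved.
@dataclass
class UserRecord:
    email: str
    country_code: str

def normalise(record: UserRecord) -> UserRecord:
    # Transform up front, so 'Alice@X.com ' and 'alice@x.com'
    # never end up as two different values in the database.
    return UserRecord(
        email=record.email.strip().lower(),
        country_code=record.country_code.strip().upper(),
    )

def validate(record: UserRecord) -> None:
    if "@" not in record.email:
        raise ValueError(f"invalid email: {record.email!r}")
    if len(record.country_code) != 2:
        raise ValueError(f"invalid country code: {record.country_code!r}")

def save(record: UserRecord, table: list) -> None:
    record = normalise(record)
    validate(record)       # reject bad data at the door
    table.append(record)   # stand-in for the real INSERT

table = []
save(UserRecord("  Alice@Example.COM ", "pl"), table)
```

Every record that reaches the table is already in canonical form, and malformed input is rejected at write time, when it is still one user's problem rather than a migration script's.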

As the project increases in complexity, there are more and more things to maintain that are not directly related to feature development. This may be an unpleasant surprise for a new graduate who previously developed a ‘whole’ project by herself in a few months.


This concludes Part 1, where we explored the two faces of software engineering, from foundations to everyday reality. We started with the core aspects like algorithmic thinking and computational complexity, then moved to practical skills such as debugging, navigating the business domain and handling real-world data. We’ve highlighted the significance of both theoretical and practical knowledge in shaping a well-rounded software engineer.

Stay tuned for Part 2, where we will look at how we can combine theory with practice and find the right balance.

In the meantime, I invite you to connect with me on LinkedIn or email me. I’d love to hear about your journey from theory to software engineering practice.

Share the happiness :)