Achieving future-proof software

As IT engineers we take great pride when our work improves an aspect of society. Even the little things, like shaving half a second off a loading time, mean a lot. And just imagine the feeling when we launch an app after nine months of hard work and dedication and people start using it - these achievements are our fuel. Forget 'I turn coffee into code' and the like; this is the real adrenaline rush, the real motivator. Giving birth to something, leaving a mark, creating something that will be there to serve its purpose. But will it? Even if all tests pass, all scenarios are well thought out, everything is perfectly built, and everyone is happy with the solution, one key question lurks in the back of everyone's mind: Will it last? Or rather, how long will it last?


Can software architecture be future-proofed like building architecture?

Thinking about man-made things that last, construction engineers should know best: they make things that last for decades, generations, even millennia. We all know about the Golden Gate Bridge, the Three Gorges Dam, and the Burj Khalifa. We learned in school about the Great Wall and the Great Pyramid of Giza. Engineering marvels tirelessly serve their purpose while also standing silently as monuments to the genius minds that brought them to life. However, there is one thing even more astonishing than their greatness: they don't seem to have a problem with time. It seems construction engineers have had the capability to predict the future and build for it. Sure, there were some scary failures, like buildings devastated by earthquakes or bridges collapsing due to resonance, but even these occurrences were well documented and led to new building standards, disseminated to engineers around the world, making sure that mistakes made in the past remain there.

One has to (and often does) compare building software with constructing a building. There are just too many similar concepts to turn a blind eye to and ignore the urge for a perfect blueprint, a standardized and well-known set of materials, clear roles on the construction site, and a strict timeline. But one key difference between the construction of a building and working software is predictability.

We all know the basics of how a building looks (well, at least normal ones), how it's constructed, how it's connected to the world, and how it's used. These concepts change and evolve too, but at a much slower pace. Software, on the other hand, has a special power to surprise us time and time again, with new tools and frameworks coming and going so fast we actually need additional software just to track the news.

This is normal, it’s a part of evolution and progress, but we have to accept the impact of these changes on predictability. To increase the entropy even further, businesses built around software are evolving at a similar or higher pace. Not to forget, the user’s needs drastically change in just a few years. Don’t you agree?

Just go back a bit in time and try to remember if you had any clue that books would be delivered to your home by a drone while you made your morning coffee from your phone; that you could then work from a robot-vacuumed home and have a meeting with 200 colleagues; and then get some rest after work on a smart couch and stream more content than your parents' entire generation ever saw. Maybe even stumble upon an ad, book a fully organized trip to the other side of the world with a couple of clicks, and pay for all of it with instant and secure transactions. This is just one 'normal' day. So all three topics (emerging tech, evolving businesses, and user needs) point to the inevitable conclusion that the environment in IT is changing at a rate where even trying to predict the smallest change looks like a big gamble.

 

What does it take to build future-proof software?

So, what can we do to future-proof our software? How do we cope with the changing landscape and build structures that will hold up for generations to come? How do we design for the unexpected? How do we build future-proof software?

We don't. We build for what we know, extend and generalize for what we can expect with a higher degree of certainty, but most importantly, we make sure we are able to adapt and evolve, and to do that fast. There are a lot of good practices, principles, methodologies, and examples out there, and we certainly follow them.

 

5 factors that can help you future-proof your software

In this article, we will look at a narrowed-down list of the most important concepts that, when done right, drastically enhance the longevity of software and can help you build future-proof software solutions.

 

1. Resilient Architecture Embracing Change

Smaller, decoupled parts tend to survive and adapt much better than large ones (think of the dinosaurs and the tardigrade). Smaller parts are easier to understand, less complex, have a single responsibility, and are easier to change without disrupting the whole system. A well-thought-out architecture should provide capabilities for adapting to environmental changes: it should react to situations where we have more or less traffic, and it should continue to work even when the most unlikely scenarios occur.

A great example is Netflix, specifically its chaos engineering practice, where engineers constantly push the limits and do crazy things like shutting down a server just to see what happens. The trick is to expect even the most unlikely scenarios and then test your system under those circumstances - in production. If we have built for resilience and redundancy, we shouldn't be scared to turn off a feature or a service.

Key takeaways:

  • Change isolated parts without breaking anything
  • Deploy frequently with no downtime
  • Respond to changing workloads with queues and autoscaling
  • Turn features on or off
  • Make live configuration changes
  • Offer at least the basic service when everything else fails
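Two of these takeaways - toggling features and degrading gracefully - can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical in-memory flag store; a real system would back it with a configuration service so flags can be flipped live, without a redeploy.

```python
class FeatureFlags:
    """In-memory flag store; hypothetical stand-in for a config service."""

    def __init__(self, flags=None):
        self._flags = dict(flags or {})

    def is_enabled(self, name: str) -> bool:
        return self._flags.get(name, False)

    def set(self, name: str, enabled: bool) -> None:
        # A live configuration change: no restart, no redeploy.
        self._flags[name] = enabled


def personalized_recommendations(user_id: int) -> list:
    # Placeholder for a call to a (possibly failing) recommendation service.
    return [f"pick-for-{user_id}"]


def recommendations(flags: FeatureFlags, user_id: int) -> list:
    """Serve the rich feature when enabled, a basic fallback when not."""
    if flags.is_enabled("personalized-recs"):
        return personalized_recommendations(user_id)
    # Degraded but functional: the basic service still works.
    return ["bestseller-1", "bestseller-2"]


flags = FeatureFlags({"personalized-recs": True})
print(recommendations(flags, 42))
flags.set("personalized-recs", False)  # e.g. the service is misbehaving
print(recommendations(flags, 42))
```

The point is that turning the feature off is a data change, not a code change, so the system keeps answering even when its richest component fails.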

2. Cloud & IaC

This topic has become a rather obvious one, but it has to be mentioned here: the flexibility offered by the cloud is unrivaled and crucial for any future plans. By using cloud services, businesses can adapt to changing environments without taking unnecessary risks and making unnecessary investments. Scaling is natural and goes hand in hand with the success of the product. High availability and responding to security threats are now out-of-the-box solutions. Scripting your infrastructure is now a standard - but it wasn't on the radar a few years ago. With a scripted infrastructure we can even push the limit and make it independent of a specific cloud provider, provided we are not using proprietary services, making our software solution even more future-proof.

Key takeaways: 

  • Load balancing, auto scaling
  • Serverless computing
  • Scripted and versioned infrastructure
  • High availability
  • Security
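The core idea behind scripted infrastructure can be shown without any real provider: the desired state lives in version control as data, and an idempotent "apply" step computes the diff against what is actually running. The resource names and fields below are invented for illustration; tools like Terraform, Pulumi, or CloudFormation work on this same plan/apply principle.

```python
def plan(desired: dict, actual: dict) -> dict:
    """Diff desired state vs. actual state, in the spirit of `terraform plan`."""
    to_create = {k: v for k, v in desired.items() if k not in actual}
    to_update = {k: v for k, v in desired.items()
                 if k in actual and actual[k] != v}
    to_delete = [k for k in actual if k not in desired]
    return {"create": to_create, "update": to_update, "delete": to_delete}


# Desired state: checked into git, reviewed like any other code change.
desired = {
    "web-server": {"type": "vm", "size": "small", "count": 2},
    "queue": {"type": "message-queue"},
}
# Actual state: what the cloud currently runs (one oversized VM, no queue).
actual = {
    "web-server": {"type": "vm", "size": "large", "count": 1},
}

print(plan(desired, actual))
```

Because the plan is a pure function of desired and actual state, applying it twice changes nothing; that idempotence is what makes scripted infrastructure reproducible across environments and providers.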

 

3. Automation & Maintainability

Maintainability in software is often the key differentiator between success and failure. Once a product hits the market and gets traction, more and more features will be required, bugs will be introduced, problems will arise, and there will be less and less time to address them while we are still getting more feature requests to respond to business needs.

At this point we are fighting against time: if we are slower than the moving horizon, we will end up just fixing bugs and fighting to escape the darkness. Software supported by a high level of automation has a far greater chance of surviving. These two topics should always go together: you can't maintain something with a low degree of automation in the long run. Automation buys us the time to think and frees us from repetitive work.

With fast and automated pipelines running all the needed builds, deployments, and tests, we are able to release whenever needed without being scared something will break. Not after two weeks, not after two days - now. Automation is also part of our development process, represented by linting and code analysis, automated database migrations, scaffolding tools, and intelligent tools helping developers.

Reducing the footprint of the apps we build is a practice we like to follow. The decision to use external libraries, services, and tools must be made only if they have a really good chance of being maintained and updated regularly; otherwise, we are just increasing the risk of the software becoming legacy, stuck at a certain version just because someone else decided to stop supporting something.

Proper monitoring and alerting are the eyes and ears of the team. With them in place, crucial minutes or even hours are saved, and we are able to put out fires at the first sign of smoke. Anything that saves time actually buys us time to think, to focus - and that is the most valuable time.
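The "first sign of smoke" idea can be sketched as threshold-based alerting over a sliding window of request outcomes. The window size and error-rate threshold below are invented numbers for illustration; a production setup would use a dedicated stack such as Prometheus with Alertmanager rather than hand-rolled code.

```python
from collections import deque


class ErrorRateMonitor:
    """Alert when the error rate over the last N requests crosses a threshold."""

    def __init__(self, window: int = 100, threshold: float = 0.05):
        self.window = deque(maxlen=window)  # keeps only the last N outcomes
        self.threshold = threshold

    def record(self, ok: bool) -> None:
        self.window.append(ok)

    def should_alert(self) -> bool:
        if not self.window:
            return False
        error_rate = self.window.count(False) / len(self.window)
        return error_rate > self.threshold


monitor = ErrorRateMonitor(window=100, threshold=0.05)
for _ in range(90):
    monitor.record(True)
for _ in range(10):
    monitor.record(False)  # 10% errors in the window: smoke before fire

print(monitor.should_alert())
```

A sliding window rather than a global counter is what makes the alert react to the current situation: old failures age out, so a recovered system stops alerting on its own.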

Key takeaways:

  • Automated tests
  • Automated deployments
  • Well-documented and maintained dependencies
  • Linting and code analysis
  • Monitoring and alerts

 

4. Machine learning (ML) & Artificial intelligence (AI)

The ability to adapt to environmental changes requires making decisions fast, which in turn requires knowledge. In some scenarios, relying on people to always make the right decisions, quickly, under all circumstances and with limited knowledge, is neither the most effective choice nor the best bet for future-proofing your application. Adopting machine learning gives us a clear advantage in the future-proofing category, as the software now learns from the environment and adapts to it with no changes in the code at all. There are limits to where machine learning can be implemented and replace static algorithms, but there is also a vast area for research and innovation. An especially interesting area we are working on, where ML excels, is making predictions with a certain accuracy about the future outcome of events, such as whether a booking will be canceled or whether an insured patient will develop a certain condition. These predictions are then used to optimize workflows or to provide a better service to end users.
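To make the booking-cancellation example concrete, here is a toy logistic regression trained by gradient descent on synthetic data. Everything here - the two features, the hidden data-generating rule, the hyperparameters - is invented for illustration; a real project would use a library such as scikit-learn and real historical bookings.

```python
import math
import random

random.seed(0)


def synth_booking():
    # Invented features: booking lead time and price tier, both scaled to [0, 1].
    lead_time = random.random()
    price = random.random()
    # Hidden rule for the toy data: long lead times and high prices cancel more.
    p_cancel = 1 / (1 + math.exp(-(3 * lead_time + 2 * price - 2.5)))
    return (lead_time, price), 1 if random.random() < p_cancel else 0


data = [synth_booking() for _ in range(2000)]

# Train a logistic model p = sigmoid(w1*x1 + w2*x2 + b) with gradient descent.
w = [0.0, 0.0]
b = 0.0
lr = 0.5
for _ in range(300):
    gw, gb = [0.0, 0.0], 0.0
    for (x1, x2), y in data:
        p = 1 / (1 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
        err = p - y  # gradient of log-loss w.r.t. the logit
        gw[0] += err * x1
        gw[1] += err * x2
        gb += err
    n = len(data)
    w[0] -= lr * gw[0] / n
    w[1] -= lr * gw[1] / n
    b -= lr * gb / n


def predict_cancellation(lead_time: float, price: float) -> float:
    """Estimated probability that this booking gets canceled."""
    return 1 / (1 + math.exp(-(w[0] * lead_time + w[1] * price + b)))


# A late, cheap booking vs. an early, expensive one.
print(predict_cancellation(0.1, 0.1), predict_cancellation(0.9, 0.9))
```

The point matching the text above: when the data distribution shifts, retraining on fresh data updates the weights and the predictions follow, with no change to the application code that consumes them.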

Key takeaways:

  • Continuous learning and adapting software with little to no interventions
  • Fast and accurate predictions

 

5. Organization, Knowledge & Standards

Small self-organizing squads of approximately 3 people with a clear mission are the cells, the building blocks of our organization. These blocks are then laid on top of the foundations, represented by the 5 pillars of IBORN we call cultures: UI/UX, Dev, DevOps, QA and BI/ML. Having a small team with a clear focus supported by the 5 cultures drastically increases productivity and promotes critical thinking. 

Our usual process in a few words: we start on a whiteboard, from where our design team jumps in and helps us visualize and optimize the solution. The best phase for changes is here, in the part of the process where it’s cheap to make mistakes, play around, ideate, think, and iterate. From here on we draw the architecture, do workshops for the behavior-driven specification, and compile the testing strategy. This gives us a foundation to start developing, while the DevOps team helps in preparing the pipelines and infrastructure. 

The knowledge and experience of the team that stands behind every application we build are closely tied to the success and normal everyday operation of the software. Ensuring that every member of the team has proper onboarding, training, and time to learn the business case and improve in the tech stack has an undeniable impact on the quality and speed of development and on the confidence of the team to make the right decisions and react to changes. We are also dedicated to structuring and standardizing the team knowledge through contributions from all members, as this promotes knowledge sharing inside the team but also outside of it. Dedicating time to research and experimentation creates a mindset that embraces challenges and is ready to take the leap forward when needed.

Key takeaways:

  • Small self-organizing focused teams supported by our 5 pillars
  • Invest in building knowledge
  • Collaborate on team standards
  • Dedicate time for research and experimentation

 

Is future-proof software attainable? 

The conclusion is that building future-proof software is a demanding activity. It can definitely be accomplished, but maybe not in the truest sense of the word. Predicting the direction of IT, the businesses built on top of it, and users' needs is getting harder and harder, and sometimes people take this future-proofing goal too far.

Building for that imaginary future may add unnecessary complexity to already complex systems. The YAGNI principle ('You Aren't Gonna Need It') already says it all. Instead, investing in a state where the software is supported by a high level of automation, and in an architecture that enables fast changes and reacts to them, has a much higher return on investment.

Doing everything in your power to build the knowledge, self-confidence, and potential of the team will guarantee that any future challenge is met by an army of engineers eagerly waiting for it.

on August 11, 2022