If an organisation is to scale, it needs a data- and cloud-related talent strategy. A bold statement, I know, so let us examine its main points one by one.
Boston Consulting Group reports that “[o]nly about 30% of companies navigate a digital transformation successfully.” Additionally, in their article “Lack of Skills Threatens Digital Transformation,” Gartner cite a TalentNeuron study in which 53% of respondents identified “the inability to identify needed skills” as “the No. 1 impediment to workforce transformation”.
When it comes to data and cloud specifically, these are among the skills in highest demand for the future, as reflected in multiple reports such as Randstad’s Digital Skills: Unlock Opportunities for All and Gartner’s Critical Digital Skills to Accelerate Digital Transformation.
The talent development direction that the tech world has taken is quite clear and I am not going to go into the whys of it. Instead, I would like to discuss how organisations can build a future-proof tech workforce strategy. And it all starts by addressing the digital skills gap.
Measuring the cloud industry
Many organisations try to measure the cloud industry boom through estimations and forecasts of the data volume generated globally. We should always take such reports with a pinch of salt, because their measuring approach tends to be more philosophy than actual measurement.
The International Data Corporation (IDC) even goes so far as to talk about a “digital universe”, offering a metaphor for how universes expand. Their reports state that “data is doubling every two years” and that the “global data volume will reach 175 zettabytes by 2025”.
Indeed, 175 ZB is an unfathomably large volume (roughly 175 billion one-terabyte drives), and it is hard to imagine what we could do with that much information.
Yet, this certainly impacts the market. To begin with, the more data, the more needs and expectations from the business. More data generates more ideas and use cases. It then generates innovations and improvements on the product level, further increasing the competition on the organisational level.
Finally, it forms and transforms into industry trends and standards, constant dynamics and waves. As with other aspects of life, the only constant in the recent software engineering market is change.
How do you prepare for change?
The good news is that we do not have to reinvent the wheel; we can simply listen to our good old “influencers” on how to deal with change. Kent Beck did the homework for us when he summarised the most important lessons back in 2004.
In his book Extreme Programming Explained: Embrace Change, he suggests that the software engineering essentials are values, principles and practices.
I have always imagined the way these three relate to one another and to change as the structure of a planet:
At the core, we see the values which are the least exposed to the corrosion that changes may cause. The nearer you get to the surface, the higher the mutability with technologies being prone to change the most.
Oddly, when we think about expertise, seniority or hiring, we are often biased towards tech. However, this is the most volatile component and the one that is easiest to learn and adapt to.
A quick comparison of the technology landscapes of 2012 and 2023 shows that the entire tech industry has changed dramatically.
Companies whose talent development strategy has focused entirely on a specific tech stack have suffered immensely, as many of the tools have become obsolete within a decade. Moreover, the industry has been awash with mergers and acquisitions, which often affect support and necessitate migrations.
Surely, a migration should not cause a company to sweep out its entire engineering team. And it would not, if the talent development strategy focused on the right components, viz. the ones at the core.
Practices
Values and principles are the compass that helps teams to navigate uncertainty and enables sustainable decision-making.
But if they are so important, how do we select them?
Engineering values and principles should overlap with company values or derive from them. While they are essential, they are also a vast topic that we are going to cover in a separate article.
When it comes to practices, however, they are much closer to technical terminology and day-to-day technical life than the core components are.
They are guidelines that govern technical design, architecture, software engineering lifecycles, standards and development environments.
If we take a high-level view of software engineering, we can clearly see that application engineering, as the mainstream branch, is far more mature than its sub-branches. This is a natural course of events, the result of the mainstream branch having the longest history and the largest community.
The data and AI subgroups, on the other hand, are the youngsters in the family, with quantum computing being the toddler. Yet new disciplines adopt their practices from the mainstream one, and this is exactly what can make them resilient.
Let’s look at some examples.
Software engineering practices in data
Those of you who have trading experience are familiar with the concept of leading indicators. These are signals used to predict certain future events, typically in an economic context.
When I look at the timeline of data’s adoption of certain practices, I cannot help but think of leading indicators. I would even go so far as to refer to the current stage as the software engineering practices adoption era.
Test-driven development (TDD):
- In software engineering, it started back in 2002 with Kent Beck’s book Test Driven Development: By Example;
- In data engineering, the tool that enabled the adoption of this practice was dbt, released in 2016 (see the sketch below).
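To make this concrete, here is a minimal, hypothetical sketch of TDD applied to a data transformation in Python. The function and test names are invented for illustration; in dbt itself the same idea is expressed declaratively, for example through built-in tests such as unique and not_null.

```python
# A TDD-style sketch for a data transformation; run with pytest.
# deduplicate_orders is a hypothetical example, not a real library function.

def deduplicate_orders(orders: list[dict]) -> list[dict]:
    """Keep only the latest record per order_id, where 'latest' means highest updated_at."""
    latest: dict[str, dict] = {}
    for order in orders:
        current = latest.get(order["order_id"])
        if current is None or order["updated_at"] > current["updated_at"]:
            latest[order["order_id"]] = order
    return list(latest.values())

# Per TDD, a test like this is written first and drives the implementation above.
def test_deduplicate_orders_keeps_latest_record():
    orders = [
        {"order_id": "A1", "updated_at": 1, "status": "pending"},
        {"order_id": "A1", "updated_at": 2, "status": "shipped"},
        {"order_id": "B2", "updated_at": 1, "status": "pending"},
    ]
    result = deduplicate_orders(orders)
    assert len(result) == 2
    assert [o["status"] for o in result if o["order_id"] == "A1"] == ["shipped"]
```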
Continuous integration / continuous deployment (CI/CD):
- In software engineering, this practice again has its roots in a book, the 2010 Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation by Jez Humble and David Farley, published around the same time Jenkins came about;
- On the data side, the closest phenomenon we recognise is DataOps, which was first mentioned in an article from 2014, became popular around 2017 and appeared on Gartner’s radar in 2018 (see the sketch below).
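Purely as an illustration, the sketch below captures the CI/CD idea in Python: automated checks gate every deployment. The run_quality_checks and deploy functions are hypothetical stand-ins; a real setup would live in a CI server such as Jenkins rather than in a hand-rolled script.

```python
# A toy CI/CD gate: deploy only if the automated checks pass.
# run_quality_checks and deploy are hypothetical stand-ins for real pipeline steps.
import subprocess
import sys

def run_quality_checks() -> bool:
    """Run the test suite; a real pipeline might invoke pytest, dbt test, linters, etc."""
    result = subprocess.run(["pytest", "tests/", "-q"])
    return result.returncode == 0

def deploy() -> None:
    """Placeholder for the deployment step, e.g. promoting data models to production."""
    print("All checks passed; deploying...")

if __name__ == "__main__":
    if run_quality_checks():
        deploy()
    else:
        sys.exit("Quality checks failed; deployment blocked.")
```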
Infrastructure as Code (IaC):
- Puppet was founded in 2005, AWS CloudFormation was released in 2011 and Terraform came about in 2014;
- It is probably a fair assumption that, several years after a data vendor’s main product release, the respective Terraform provider follows; both the Snowflake and Databricks providers arrived in 2020 (see the sketch below).
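Terraform configurations themselves are written in HCL; to keep this article’s examples in a single language, here is the same declarative idea sketched with Pulumi’s Python SDK. The resource name is an illustrative assumption, and the snippet presumes configured AWS credentials and a Pulumi project to run under pulumi up.

```python
# A minimal IaC sketch using Pulumi's Python SDK (Terraform itself uses HCL).
# Infrastructure is declared as code; the engine reconciles desired and actual state.
import pulumi
import pulumi_aws as aws

# Declare an S3 bucket for raw data; "raw-data" is an illustrative resource name.
raw_bucket = aws.s3.Bucket("raw-data")

# Expose the bucket name as a stack output.
pulumi.export("bucket_name", raw_bucket.id)
```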
Domain-driven design (DDD), where we should note this is an approach rather than a practice:
- For software engineering, in 2003, Eric Evans published his book Domain-Driven Design: Tackling Complexity in the Heart of Software;
- For data, it was 16 years later that Zhamak Dehghani published an article titled “How to Move Beyond a Monolithic Data Lake to a Distributed Data Mesh”, which was data engineering’s reaction to DDD and was followed by fundamental tooling support from dbt in 2023 (see the sketch below).
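To give a flavour of the approach, here is a toy sketch of domain-oriented data ownership in the data mesh spirit: each domain team owns its bounded context and publishes its data as a product behind an explicit contract. Every name below is invented for illustration.

```python
# A toy sketch of domain-oriented data ownership; all names are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class DataProduct:
    """The published contract: consumers depend on this, not on internal tables."""
    name: str
    owner_domain: str
    schema_version: str

class OrdersDomain:
    """The orders team owns this bounded context and serves its data as a product."""
    def data_product(self) -> DataProduct:
        return DataProduct(name="orders_daily", owner_domain="orders", schema_version="1.2")

# A consumer in another domain reads the published contract, not the producer's internals.
print(OrdersDomain().data_product())
```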
The practices described above were introduced up to two decades ago. Enough time has passed for the data industry to learn its lessons and even start measuring performance.
The DevOps Research and Assessment (DORA) programme reports that the practices mentioned above have been contributing the most to the success of software engineering projects.
As I have summarised above, in data we can see a lag, spanning from a couple of years in some areas to a significant period in others. Despite this lag, the adoption of these practices has clearly begun. There is no report for data, equivalent to DORA for “general” software engineering, that shows the success criteria for data-specific projects, but why would they be that much different?
How do we use the newly adopted practices in talent strategy?
A talent strategy consists of the following components:
- Recruiting and retaining talent;
- Learning and talent development;
- Performance management.
What these three pillars have in common is the methodology to determine seniority at a company.
Every discussion around talent development revolves around the same questions: What are the requirements for the levels we are hiring for? How does the company empower the employee to become more senior? What are the assessment criteria for the level in question?
Incorrect seniority calibration (i.e. a flawed progression framework) can spawn the wrong culture, inequity and churn, all of which are the opposite of future-proof.
This is where practices come in. If we integrate software engineering practices into the progression framework’s criteria system, we will have successful formulas in place.
This will effectively prevent seniority from being equated with knowing the technical documentation of a product by heart or having spent 15 years using a specific tool.
Here is how we can successfully leverage the progression framework in the three main areas:
Recruitment
The more overlap there is between the recruitment criteria and the progression framework, the less friction we will see later on. Recruitment should be an ultrafast edition of (past) performance assessment. If we do not synchronise the two, we can easily misjudge a person’s seniority and maturity, which not only makes onboarding difficult for new joiners but also causes frustration among existing employees.
Performance management
When it comes to seniority, a fair measurement involves everyone being assessed against criteria of genuine contribution to productivity, whether on the product development or the service delivery side. Inequities here can cause compensation differences and make people even more frustrated. Of course, a performance framework comprises many components, and I have not covered the company’s cultural and organisational aspects here.
Learning and development
Placing the progression framework at the heart of the learning and development strategy helps employees acquire the right skills, which in turn enables them to advance their careers more effectively. It offers guidance on where and how an employee can have the biggest impact on the organisation.
Here is a pragmatic example: Company X has a long-term goal to build a future-proof engineering team. Based on this, the talent management strategy emphasises the importance of multiple proven software engineering practices, such as CI/CD. As a consequence, CI/CD is allocated significant weight in the progression framework, which is transparent to the engineers.
Engineers are aware that the more they follow the progression framework, the more impact they will have on the organisation. So they prefer to learn CI/CD as opposed to a new programming language and they try to apply the practice as much as they can on their projects. As a result, they deliver high-quality products, they focus their time on future-proof initiatives and ultimately they make a bigger impact on their company than they would have if they had focused on something else.
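As a purely hypothetical sketch, weights like these could be encoded directly in a progression framework so that the mapping from demonstrated practices to seniority is explicit and auditable. The practice names, weights, thresholds and levels below are illustrative assumptions, not a recommended calibration.

```python
# A hypothetical progression framework: practices carry explicit weights.
# All weights, thresholds and level names are illustrative assumptions.
PRACTICE_WEIGHTS = {
    "ci_cd": 0.30,                    # heavily weighted, matching the long-term goal
    "tdd": 0.25,
    "infrastructure_as_code": 0.20,
    "domain_driven_design": 0.15,
    "tool_specific_knowledge": 0.10,  # deliberately light: the most volatile component
}

LEVEL_THRESHOLDS = [(0.75, "Senior"), (0.50, "Mid-level"), (0.00, "Junior")]

def seniority(scores: dict[str, float]) -> str:
    """Map per-practice scores (0.0 to 1.0) to a level via the weighted total."""
    total = sum(weight * scores.get(practice, 0.0)
                for practice, weight in PRACTICE_WEIGHTS.items())
    for threshold, level in LEVEL_THRESHOLDS:
        if total >= threshold:
            return level
    return "Junior"

# Strong CI/CD and TDD outweigh encyclopaedic knowledge of one tool: prints "Mid-level".
print(seniority({"ci_cd": 0.9, "tdd": 0.8, "tool_specific_knowledge": 1.0}))
```

Note how tool-specific knowledge deliberately carries the smallest weight, echoing the planet model: the practices near the core, not the volatile tech layer, drive seniority.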
Let’s talk
Setting up the right foundations is essential for a long-term, future-proof talent strategy. The data industry has started adopting some software engineering practices, and it is important to become and remain conscious of them. The success of these practices is already proven, as is their robustness to change.
Integrating these practices into the core of the talent development strategy will contribute to a consistent and equitable company culture, which is essential for building a future-proof tech team.
If you are ready to start talking about your talent development strategy, get in touch.
Meanwhile, make sure to read up on how an effective tech strategy helps improve diversity in data.