In this edition of Technology Radar, we’ve highlighted several trends that are shaping the software development landscape, as well as over a hundred individual technologies. In this macro trends article, I’ll explore some of those themes alongside topics we discussed that didn’t make it into the Radar. And yes, I will be talking about AI tools — but that’s at the end of this article, hopefully giving you a brief respite from AI hype.
Next-generation knowledge management tools for teams
Effective knowledge management is obviously important for teams to stay competitive and agile. Traditional tools like wikis have been commonly used to store and share information, but recently there has been interest in more advanced and versatile knowledge management systems. These new tools, such as Logseq and Obsidian, offer innovative features that are transforming the way teams manage, access and engage with their collective knowledge.
In the case of Logseq, we have an open-source knowledge-management system powered by a graph database. It helps users organize thoughts, notes and ideas and can be adapted for team use with Git-based storage. Logseq’s approach enables teams to build a democratic and accessible knowledge base, providing each member with a personalized learning journey and facilitating efficient onboarding. Its paragraph-based linking feature allows team members to quickly find the relevant context without having to read an entire article, emphasizing its focus on consumption. However, as with any knowledge management tool, teams will need to apply good curation and management practices to avoid information overload or disorganization.
Similarly, Obsidian offers robust knowledge management capabilities, but it has a stronger emphasis on content creation. While both tools provide valuable features for managing and accessing information, their differences highlight the evolving nature of team knowledge management.
AI model validation, data quality and quality assurance tools
As artificial intelligence and machine learning models become increasingly prevalent in the tech industry, ensuring their quality, reliability and ethics has become increasingly important, too. This also extends to the quality and observability of the data used in these models. Tools like Evidently, Giskard, pandera, Deepchecks and Soda Core help teams to monitor and validate their AI models — and the underlying data — more effectively and with greater transparency.
Evidently is an open-source Python tool that focuses on monitoring machine learning models throughout their lifecycle, with an emphasis on model drift detection. Its built-in metrics, visualizations and tests offer robust model quality and data quality inspection capabilities, making it a versatile tool for maintaining stable production operations. Giskard, on the other hand, is designed to help organizations build more robust and ethical AI models by providing quality assurance features with a focus on explainability and fairness. Soda Core offers an open-source data quality and observability solution, enabling teams to validate data at various stages of processing and to set up automated monitoring checks for anomalies. The domain-specific language (DSL) SodaCL makes it easier for team members beyond data engineers to write quality checks, making the work of maintaining data integrity more collaborative. Pandera provides a readable API for expressing data validation rules, while Deepchecks is a platform for continuous ML validation.
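To make the idea of declarative data-quality checks concrete, here is a minimal, stdlib-only sketch of the kind of rules tools like Soda Core (via SodaCL) and pandera let you express. The function and check names are illustrative, not the actual APIs of any of these tools:

```python
# Hypothetical, stdlib-only sketch of declarative data-quality checks,
# in the spirit of SodaCL or pandera (not their real APIs).

def check_row_count(rows, minimum):
    """Pass if the dataset has at least `minimum` rows."""
    return len(rows) >= minimum

def check_no_missing(rows, column):
    """Pass if no row is missing a value for the given column."""
    return all(row.get(column) is not None for row in rows)

def check_in_range(rows, column, low, high):
    """Pass if every value for the column falls inside [low, high]."""
    return all(low <= row[column] <= high for row in rows)

orders = [
    {"id": 1, "amount": 42.0},
    {"id": 2, "amount": 17.5},
]

checks = [
    ("row_count >= 1", check_row_count(orders, 1)),
    ("amount not null", check_no_missing(orders, "amount")),
    ("amount between 0 and 10000", check_in_range(orders, "amount", 0, 10_000)),
]

for name, passed in checks:
    print(f"{name}: {'PASS' if passed else 'FAIL'}")
```

The real tools add far more on top of this — scheduling, anomaly detection, rich reporting — but the core idea is the same: quality expectations expressed as data, runnable at any stage of a pipeline.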
What’s interesting is that the ‘space’ for these tools isn’t yet particularly well defined. All of these tools address different but overlapping areas; a consistent orthogonal set of tools hasn’t yet emerged. That said, they’re helping to ensure AI models and their underlying data are better aligned with business objectives, while also addressing critical quality issues before production deployment.
Developer experience, team onboarding and developer journeys for enhanced efficiency
Focusing on engineering effectiveness as a systemic concept has become crucial for delivering value from idea to software-in-production. However, our discussions showed that individual developer and team effectiveness also play a significant role in the overall efficiency of engineering processes. As a result, there’s a growing interest in examining developer experience (DX), team onboarding and developer journeys as a way of improving efficiency and fostering greater collaboration.
During the Radar process, we spent some time discussing developer experience research, which involves the intentional analysis of the various factors that influence a developer’s satisfaction and productivity. Measuring productivity is notoriously difficult, with simplistic measures like lines of code or commits per day having obvious drawbacks. Other measures, such as “story points per iteration,” are also relatively easy to game and aren’t really comparable across teams. While numerous tools are emerging to help assess developer productivity, they should not be seen as a replacement for actual research: gathering opinions and bringing cross-functional groups together is essential. To obtain a comprehensive understanding of the current state, it’s vital to gather both quantitative and qualitative data from different personas, such as managers, senior and junior engineers and tenured and new employees. This approach will uncover underlying cultural or friction points that may not be initially obvious.
Another technique, “developer experience mapping,” focuses on capturing the pain points of developer teams throughout their journey within an organization. By visually representing the developer journey and identifying areas for improvement, this technique fosters better communication, prioritization and collaboration within the team. These techniques didn’t reach the Radar – they mostly ended up “too complex to blip” and will become podcasts or articles in the future – but they are both potentially useful when trying to empower engineering teams, enhance productivity and understand the factors that contribute to an effective and efficient development process.
Managing tech debt and system health through collaborative decision-making
Managing tech debt and maintaining system health are essential for the long-term success of any product or system. Tech debt has been in the news cycle over the last six months, but it’s certainly not a new concept. We’re happy that it’s being discussed, but ultimately managing tech debt is not rocket science: good product managers and tech leads should already be considering cross-functional requirements, including tech debt management. Fitness functions can identify and measure important quality characteristics, and we can describe tech debt in terms of how it may improve those characteristics.
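As a rough illustration of what a fitness function might look like in practice, here is a hypothetical sketch that guards one quality characteristic — module coupling — by failing if the average number of dependencies per module exceeds a threshold. The module names and threshold are made up for the example:

```python
# Hypothetical fitness function: guard against rising coupling by
# checking the average number of dependencies per module.
# Module names and the threshold are illustrative only.

DEPENDENCY_THRESHOLD = 5

module_deps = {
    "billing": ["db", "auth", "notifications"],
    "auth": ["db"],
    "reports": ["db", "billing"],
}

def coupling_fitness(deps, threshold):
    """Return (passed, average) for the coupling check."""
    avg = sum(len(v) for v in deps.values()) / len(deps)
    return avg <= threshold, avg

ok, avg = coupling_fitness(module_deps, DEPENDENCY_THRESHOLD)
print(f"average dependencies per module: {avg:.1f} -> {'PASS' if ok else 'FAIL'}")
```

Run as part of the build, a check like this turns an abstract tech debt conversation into a measurable trend that engineering and product can track together.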
An alternative perspective that we quite like is to reframe tech debt as a matter of system health, focusing on the overall well-being of software systems. This approach emphasizes the importance of tracking and managing system health in a way that balances engineering and product management perspectives.
The rise of feature toggle tools and open standards
As organizations increasingly adopt modern development practices, feature toggle tools have gained prominence for managing and controlling the rollout of new functionality. During the Radar meeting we discussed a variety of feature toggle-related tools, showcasing the growing interest and demand in this area.
Unleash, an open-source feature management solution, was popular with some of our teams due to its support for multiple hosting options, SDKs and languages. With role-based access control, SSO and flexible segmentation mechanisms, Unleash offers a comprehensive solution for teams. Flagsmith, another feature toggle tool, was recognized for its impressive development activity and open-source nature. Although it narrowly missed being featured in this edition, it remains a noteworthy contender in the space.
The OpenFeature initiative aims to establish an open standard for feature flag management. This unified API and SDK, with its developer-first and cloud-native approach, promises extensibility for both open-source and commercial offerings. OpenFeature has the potential to provide a vendor-neutral interface for feature flagging, streamlining the process for developers and organizations. We think this is worth keeping an eye on as more feature toggle tools continue to emerge.
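The essential shape OpenFeature standardizes — an evaluation API decoupled from whichever flag provider sits behind it — can be sketched in a few lines. Note that the class and method names below are illustrative, not the actual OpenFeature SDK:

```python
# Minimal sketch of a vendor-neutral feature flag interface, in the
# spirit of OpenFeature. Names are illustrative, not the real SDK.

class InMemoryProvider:
    """A trivial provider; a real one would call Unleash, Flagsmith, etc."""
    def __init__(self, flags):
        self._flags = flags

    def resolve_boolean(self, key, default, context):
        return self._flags.get(key, default)

class Client:
    """Application code talks to this interface, not to any vendor."""
    def __init__(self, provider):
        self._provider = provider

    def get_boolean_value(self, key, default, context=None):
        return self._provider.resolve_boolean(key, default, context or {})

client = Client(InMemoryProvider({"new-checkout": True}))
if client.get_boolean_value("new-checkout", False):
    print("rendering new checkout flow")
```

The value of the standard is that swapping the provider — say, from an in-memory stub in tests to a hosted service in production — leaves application code untouched.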
Reevaluating the role of low-code platforms
As low-code and no-code platforms continue to evolve and mature — and especially because these tools are likely to be augmented with AI enabling them to produce applications faster or for less expert users — we decided to reiterate our advice around bounded low-code platforms. We remain skeptical because the vendor claims around these tools are, basically, dangerously optimistic. There are no silver bullets and a low-code platform should always be evaluated in context as a potential solution, not used as a default option.
The key to successfully utilizing low-code platforms lies in understanding their “sweet spot” and deploying them responsibly in targeted scenarios. Examples of such use-cases include automating simple workflows, implementing non-critical backend processes and creating prototypes for testing or validating ideas. In these situations, low-code platforms can provide fast and cost-effective solutions without the need for extensive custom development. These things aren’t going away; if anything, we expect the flourishing AI landscape to reinvigorate the low-code conversation.
Navigating the AI revolution in software development
So, AI. The group that builds the Radar comprises over twenty senior technologists from around the company, representing more than 18 countries and 12,500 Thoughtworkers. This primed us for an in-depth discussion on AI-related technologies, starting with a proposal to assess “AI pairing” as a technique. Ultimately we rejected the idea. Despite GitHub labeling its Copilot tool as “an AI pair programmer,” we must emphasize the difference between AI tools providing knowledge and programmers offering wisdom. While AI can be a valuable coding assistant, it is no substitute for human pairing. With AI, the human remains in the driver’s seat, which contrasts with the best pairing practices, where the driver role rotates between the pair.
ChatGPT is a powerful tool, and the new GPT-4 model contains some significant improvements over the older version. In China, our teams have devised an intriguing approach to using AI to create software without feeding confidential code to the AI: they first explain the architectural style of the system, then ask ChatGPT to create a feature list, and finally ask it to create tests and then code for each feature. We experimented with using ChatGPT to help us write radar blips, and my feeling is it was helpful, especially for producing a first draft of the text. I was able to take discussion notes and have ChatGPT summarize them in one or two paragraphs, and the result was generally reasonable. I frequently found I had to tone down overly optimistic language created by the tool — ChatGPT apparently thinks all software tools are amazing and revolutionary — but otherwise got a useful first draft from it, ready for group review. The process didn’t feel radically faster than writing by hand, but it did help the process ‘flow’ more reliably, avoiding writer’s block or the intimidation of a blank page.
Current AI tools raise intellectual property concerns such as what data they were trained on and the legal status of any code or content produced by the tool. We expect these will ease over time; it’s in the interests of both the producers and consumers of these tools to resolve the current issues, and we will undoubtedly gain greater clarity as various legal challenges work their way through the courts. However, organizations should still seek legal advice when using AI tools, as some license agreements state that future versions of the AIs may incorporate any prompts, business context or code that has been submitted to them. We anticipate the arrival of “for business” versions of AI tools, which will likely clarify IP issues.
Despite these potential concerns, we believe organizations should seriously investigate AI tooling and add it to their toolbox. As with past tech revolutions like Agile, cloud, continuous delivery and DevOps, there will be a ‘right’ way to maximize AI tools’ potential across the software development lifecycle. Adapting our ways of working is essential to avoid pitfalls, such as over-reliance on AI-generated results that may appear plausible but contain hidden errors. Ensuring AI-augmented development processes maintain high-quality software output is a challenge, which we (and the industry) must address. The AI revolution represents a major shift for the IT industry. We are excited about it, and we know Thoughtworks will write more about this topic in the coming months as we learn about what works and the best way to do things.
With thanks to Marisa Hoenig, Rebecca Parsons, Pawan Shah and Scott Shaw for their input to draft versions of this article.