
TECHNOLOGY RADAR An opinionated guide to technology frontiers

Vol. 22  thoughtworks.com/radar  #TWTechRadar

The Technology Advisory Board (TAB) is a group of 20 or so senior technologists at ThoughtWorks. The TAB meets twice a year face-to-face and bi-weekly by phone. Its primary role is to be an advisory group for ThoughtWorks CTO, Rebecca Parsons. The TAB acts as a broad body that can look at topics that affect technology and technologists at ThoughtWorks.

The Technology Radar is prepared by the ThoughtWorks Technology Advisory Board. We usually create the Radar at face-to-face meetings, but given the global pandemic we've been living through, this is the first Technology Radar to be created via a virtual event.

Contributors

Rebecca Parsons (CTO), Martin Fowler (Chief Scientist), Bharani Subramaniam, Birgitta Böckeler, Camilla Crispim, Erik Dörnenburg, Evan Bottcher, Fausto de la Torre, Hao Xu, Ian Cartwright, James Lewis, Jonny LeRoy, Lakshminarasimhan Sudarshan, Mike Mason, Neal Ford, Ni Wang, Rachel Laycock, Scott Shaw, Shangqi Liu, Zhamak Dehghani

About the Radar

ThoughtWorkers are passionate about technology. We build it, research it, test it, open source it, write about it, and constantly work to improve it — for everyone. Our mission is to champion software excellence and revolutionize IT. We create and share the ThoughtWorks Technology Radar in support of that mission. The ThoughtWorks Technology Advisory Board, a group of senior technology leaders at ThoughtWorks, creates the Radar. They meet regularly to discuss the global technology strategy for ThoughtWorks and the technology trends that significantly impact our industry.

The Radar captures the output of the Technology Advisory Board’s discussions in a format that provides value to a wide range of stakeholders, from developers to CTOs. The content is intended as a concise summary.

We encourage you to explore these technologies. The Radar is graphical in nature, grouping items into techniques, tools, platforms and languages & frameworks. When Radar items could appear in multiple quadrants, we chose the one that seemed most appropriate. We further group these items in four rings to reflect our current position on them.

For more background on the Radar, see thoughtworks.com/radar/faq.

Radar at a glance

The Radar is all about tracking interesting things, which we refer to as blips. We organize the blips in the Radar using two categorizing elements: quadrants and rings. The quadrants represent different kinds of blips. The rings indicate what stage in an adoption lifecycle we think they should be in.

A blip is a technology or technique that plays a role in software development. Blips are things that are "in motion" — that is, we find their position in the Radar is changing — usually indicating that we're finding increasing confidence in them as they move through the rings.

Adopt
We feel strongly that the industry should be adopting these items. We use them when appropriate on our projects.

Trial
Worth pursuing. It's important to understand how to build up this capability. Enterprises can try this technology on a project that can handle the risk.

Assess
Worth exploring with the goal of understanding how it will affect your enterprise.

Hold
Proceed with caution.

Our Radar is forward looking. To make room for new items, we fade items that haven't moved recently, which isn't a reflection on their value but rather on our limited Radar real estate.

Themes for this edition

The Elephant in the Zoom

"Necessity is the mother of invention" — Proverb

Many companies have experimented with the idea of remote working as the technology to enable it has slowly matured. But suddenly, a global pandemic has forced companies all over the world to rapidly and fundamentally change their way of working to preserve some productivity. As many have observed, "working from home" is starkly different from "being forced to work from home during a pandemic," and we think there will be a journey ahead to become fully productive in this new context.

We've never believed that creating a Radar remotely was possible, and yet here we are — this is the first Radar we've ever produced without meeting in person. Many of the proposed blips spoke to the pressing need to enable first-class remote collaboration. We didn't want to ignore the elephant in the room and not comment on the crisis, but doing a good job of remote-first collaboration is a deep and nuanced subject and certainly not all of our advice would fit in the Radar format. So alongside this edition you'll find a podcast where we discuss our experiences in creating the Radar remote-first, a written experience report including advice on remote-first productivity, a webinar covering tech strategies in a crisis and links to other ThoughtWorks materials, including our remote working playbook. We hope that these materials, together with other internet resources, will help organizations that attempt to navigate these unknown waters.

X is Software Too

We often encourage other parts of the software delivery ecosystem to adopt beneficial engineering practices pioneered by agile software development teams; we return to this topic so often because we keep finding niches where we see slow progress on this advice. For this Radar, we decided to call out again infrastructure as code as well as pipelines as code, and we also had a number of conversations about infrastructure configurations, ML pipelines and other related areas. We find that the teams who commonly own these areas do not embrace enduring engineering practices such as applying software design principles, automation, continuous integration, testing, and so on. We understand that many factors hamper fast movement for some engineering practices: complexity (both essential and accidental), lack of knowledge, political impediments, lack of suitable tooling and many others. However, the benefits to organizations that embrace agile software delivery practices are clear and worth some effort to achieve.

Data Perspectives Maturing and Expanding

A theme that spanned many blips and quadrants in this edition concerned maturity in data, particularly techniques and tools surrounding analytical data and machine learning. We note many continuing innovations in the natural language processing (NLP) space. We also welcome both the emergence and continuing maturity of full-lifecycle machine learning tool suites, combining enduring engineering practices with combinations of tools that work well in an iterative manner, showing that "machine learning is software too." Finally, for distributed architectures such as microservices, we see great interest in data mesh as a way to effectively serve and use analytical data at scale in distributed systems. As the industry thinks more diligently about how data should work in modern systems, we're encouraged by the general direction and opening perspectives in this arena and expect to see exciting innovations in the near future.

Kubernetes & Co. Cambrian Explosion

As Kubernetes continues to consolidate its market dominance, the inevitable supporting ecosystem thrives. We discussed a number of blips surrounding Kubernetes in the tools, platforms and techniques quadrants, showing just how pervasive this subject has become. For example, Lens and k9s simplify cluster management, kind helps with local testing and Gloo offers an alternative API Gateway. Hydra is an OAuth server optimized to run on Kubernetes, and Argo CD uses Kubernetes native desired-state management to implement a CD server. These developments indicate Kubernetes is perfectly poised to create a supporting ecosystem; it offers critical capabilities but with abstractions that are often too low level or advanced for most users. Thus, the complexity void fills with tooling to either ease the configuration and use of Kubernetes or supply something missing from the core functionality. As Kubernetes continues to dominate, we see a rich ecosystem growing and expanding to take advantage of its strengths and address its weaknesses. As this ecosystem matures, we expect it to evolve toward a new set of higher-level abstractions offering the benefits of Kubernetes without the bewildering range of options.

The Radar

Techniques

Adopt
1. Applying product management to internal platforms
2. Infrastructure as code
3. Micro frontends
4. Pipelines as code
5. Pragmatic remote pairing
6. Simplest possible feature toggle

Trial
7. Continuous delivery for machine learning (CD4ML)
8. Ethical bias testing
9. GraphQL for server-side resource aggregation
10. Micro frontends for mobile
11. Platform engineering product teams
12. Security policy as code
13. Semi-supervised learning loops
14. Transfer learning for NLP
15. Use "remote native" processes and approaches
16. Zero trust architecture (ZTA)

Assess
17. Data mesh
18. Decentralized identity
19. Declarative data pipeline definition
20. DeepWalk
21. Managing stateful systems via container orchestration
22. Preflight builds

Hold
23. Cloud lift and shift
24. Legacy migration feature parity
25. Log aggregation for business analytics
26. Long-lived branches with Gitflow
27. Snapshot testing only

Platforms

Adopt
28. .NET Core
29. Istio

Trial
30. Anka
31. Argo CD
32. Crowdin
33. eBPF
34. Firebase
35. Hot Chocolate
36. Hydra
37. OpenTelemetry
38. Snowflake

Assess
39. Anthos
40. Apache Pulsar
41. Cosmos
42. Google BigQuery ML
43. JupyterLab
44. Marquez
45. Matomo
46. MeiliSearch
47. Stratos
48. Trillian

Hold
49. Node overload

Tools

Adopt
50. Cypress
51. Figma

Trial
52. Dojo
53. DVC
54. Experiment tracking tools for machine learning
55. Goss
56. Jaeger
57. k9s
58. kind
59. mkcert
60. MURAL
61. Open Policy Agent (OPA)
62. Optimal Workshop
63. Phrase
64. ScoutSuite
65. Visual regression testing tools
66. Visual Studio Live Share

Assess
67. Apache Superset
68. AsyncAPI
69. ConfigCat
70. Gitpod
71. Gloo
72. Lens
73. Manifold
74. Sizzy
75. Snowpack
76. tfsec

Languages & Frameworks

Adopt
77. React Hooks
78. React Testing Library
79. Vue.js

Trial
80. CSS-in-JS
81. Exposed
82. GraphQL Inspector
83. Karate
84. Koin
85. NestJS
86. PyTorch
87. Rust
88. Sarama
89. SwiftUI

Assess
90. Clinic.js Bubbleprof
91. Deequ
92. ERNIE
93. MediaPipe
94. Tailwind CSS
95. Tamer
96.
97. XState

Hold
98. Enzyme

Techniques

Applying product management to internal platforms
Adopt

More and more companies are building internal platforms to roll out new digital solutions quickly and efficiently. Companies that succeed with this strategy are applying product management to internal platforms. This means establishing empathy with internal consumers (the development teams) and collaborating with them on the design. Platform product managers create roadmaps and ensure the platform delivers value to the business and enhances the developer experience. Unfortunately, we're also seeing less successful approaches, where teams create a platform in the void, based on unverified assumptions and without internal customers. These platforms, often despite aggressive internal tactics, end up being underutilized and a drain on the organization's delivery capability. As usual, good product management is all about building products that consumers love.

Infrastructure as code
Adopt

Although infrastructure as code is a relatively old technique (we've featured it in the Radar in 2011), it has become vitally important in the modern cloud era where the act of setting up infrastructure has become the passing of configuration instructions to a cloud platform. When we say "as code" we mean that all the good practices we've learned in the software world should be applied to infrastructure. Using source control, adhering to the DRY principle, modularization, maintainability, and using automated testing and deployment are all critical practices. Those of us with a deep software and infrastructure background need to empathize with and support colleagues who do not. Saying "treat infrastructure like code" isn't enough; we need to ensure the hard-won learnings from the software world are also applied consistently throughout the infrastructure realm.
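To make the "as code" point concrete, here is a minimal sketch (not part of the original blip) that declares a piece of cloud infrastructure with Pulumi's Python SDK; the resource name, tags and exported output are illustrative assumptions, and any declarative tool that keeps the definition in version control serves the same purpose.

```python
# A minimal infrastructure-as-code sketch using Pulumi's Python SDK.
# Assumes the pulumi and pulumi_aws packages and a configured AWS-backed stack.
import pulumi
import pulumi_aws as aws

# Declare the desired state; the tool computes and applies the plan on `pulumi up`.
artifact_bucket = aws.s3.Bucket(
    "build-artifacts",  # logical resource name (illustrative)
    tags={"owner": "platform-team", "managed-by": "pulumi"},
)

# Export the generated bucket name so pipelines or other stacks can consume it.
pulumi.export("artifact_bucket_name", artifact_bucket.id)
```

Because this definition is ordinary code under source control, it can be reviewed, modularized and exercised by automated tests like any other part of the system.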
Micro frontends
Adopt

We've seen significant benefits from introducing microservices, which have allowed teams to scale the delivery of independently deployed and maintained services. Unfortunately, we've also seen many teams create a front-end monolith — a large, entangled browser application that sits on top of the back-end services — largely neutralizing the benefits of microservices. Micro frontends have continued to gain in popularity since they were first introduced. We've seen many teams adopt some form of this architecture as a way to manage the complexity of multiple developers and teams contributing to the same user experience. In June of last year, one of the originators of this technique published an introductory article that serves as a reference for micro frontends. It shows how this style can be implemented using various web programming mechanisms and builds out an example application using React.js. We're confident this style will grow in popularity as larger organizations try to decompose UI development across multiple teams.

Pipelines as code
Adopt

The pipelines as code technique emphasizes that the configuration of delivery pipelines that build, test and deploy our applications or infrastructure should be treated as code; it should be placed under source control and modularized in reusable components with automated testing and deployment. As organizations move to decentralized autonomous teams building microservices or micro frontends, the need for engineering practices in managing pipelines as code increases to keep building and deploying software consistent within the organization. This need has given rise to delivery pipeline templates and tooling that enable a standardized way to build and deploy services and applications. Such tools use the declarative delivery pipelines of applications, adopting a pipeline blueprint to execute the underlying tasks for various stages of a delivery lifecycle such as build, test and deployment; and they abstract away implementation details. The ability to build, test and deploy pipelines as code should be one of the evaluation criteria for choosing a CI/CD tool.

Pragmatic remote pairing
Adopt

We firmly believe that pair programming improves the quality of code, spreads knowledge throughout a team and allows overall faster delivery of software. In a post COVID-19 world, however, many software teams will be distributed or fully remote, and in this situation we recommend pragmatic remote pairing: adjusting pairing practices to what's possible given the tools at hand. Consider tools such as Visual Studio Live Share for efficient, low-latency collaboration. Only resort to pixel-sharing if both participants reside in relative geographic proximity and have high-bandwidth internet connections. Pair developers who are in similar time zones rather than expecting pairing to work between participants regardless of their location. If pairing isn't working for logistical reasons, fall back to practices such as individual programming augmented via code reviews, pull-request collaboration (but beware long-lived branches with Gitflow) or shorter pairing sessions for critical parts of the code. We've engaged in remote pairing for years, and we've found it to be effective if done with a dose of pragmatism.

Simplest possible feature toggle
Adopt

Unfortunately, feature toggles are less common than we'd like, and quite often we see people mixing up their types and use cases. It's quite common to come across teams that use heavyweight platforms such as LaunchDarkly to implement feature toggles, including release toggles, to benefit from continuous integration, when all you need are if/else conditionals. Therefore, unless you need A/B testing or canary releases, or want to hand over feature release responsibility to business folks, we encourage you to use the simplest possible feature toggle instead of unnecessarily complex feature toggle frameworks.
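As a rough illustration of what "simplest possible" can mean in practice, a release toggle is often nothing more than a boolean read from configuration and an if/else around the new code path. The function names, the environment-variable source and the pricing example below are illustrative assumptions, not a prescribed implementation.

```python
# Simplest possible release toggle: a boolean flag read from configuration,
# guarded by a plain if/else. No toggle framework required.
import os

def new_pricing_enabled() -> bool:
    # The flag source (an environment variable here) is an illustrative choice;
    # a config file or settings service works just as well.
    return os.getenv("NEW_PRICING_ENABLED", "false").lower() == "true"

def calculate_price(basket):
    if new_pricing_enabled():
        return _calculate_price_v2(basket)  # new code path, kept behind the toggle
    return _calculate_price_v1(basket)      # existing behaviour, the default

def _calculate_price_v1(basket):
    return sum(item.price for item in basket)

def _calculate_price_v2(basket):
    return sum(item.price for item in basket) * 0.95  # e.g. a new discount rule
```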
Continuous delivery for machine learning (CD4ML)
Trial

Applying machine learning to make business applications and services intelligent is more than just training models and serving them. It requires implementing end-to-end and continuously repeatable cycles of training, testing, deploying, monitoring and operating the models. Continuous delivery for machine learning (CD4ML) is a technique that enables reliable end-to-end cycles of developing, deploying and monitoring machine learning models. The underpinning technology stack to enable CD4ML includes tooling for accessing and discovering data, version control of artefacts (such as data, model and code), continuous delivery pipelines, automated environment provisioning for various deployments and experiments, model performance assessment and tracking, and model operational observability. Companies can choose their own tool set depending on their existing tech stack. CD4ML emphasizes automation and removing manual handoffs. CD4ML is our de facto approach for developing ML models.

Ethical bias testing
Trial

Over the past year, we've seen a shift in interest around machine learning and deep neural networks in particular. Until now, tool and technique development has been driven by excitement over the remarkable capabilities of these models. Currently, though, there is rising concern that these models could cause unintentional harm. For example, a model could be trained inadvertently to make profitable credit decisions by simply excluding disadvantaged applicants. Fortunately, we're seeing a growing interest in ethical bias testing that will help to uncover potentially harmful decisions. Tools such as lime, AI Fairness 360 or What-If Tool can help uncover inaccuracies that result from underrepresented groups in training data, and visualization tools such as Google Facets or Facets Dive can be used to discover subgroups within a corpus of training data. We've used lime (local interpretable model-agnostic explanations) in addition to this technique in order to understand the predictions of any machine-learning classifier and what classifiers (or models) are doing.
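Dedicated toolkits such as AI Fairness 360 go much further, but the underlying idea can be shown with a deliberately simple check that compares a model's positive-outcome rate across groups of a protected attribute. The column names, the toy data and the 0.8 threshold (the common "four-fifths rule" heuristic) below are illustrative assumptions.

```python
# A deliberately simple bias check: compare approval rates across groups of a
# protected attribute and flag a large gap. Fairness toolkits compute this and
# many richer metrics; this sketch only illustrates the idea.
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, prediction_col: str) -> float:
    """Ratio of the lowest to the highest positive-prediction rate across groups."""
    rates = df.groupby(group_col)[prediction_col].mean()
    return rates.min() / rates.max()

# Illustrative data: model decisions (1 = credit approved) per applicant group.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0,   0],
})

ratio = disparate_impact(decisions, "group", "approved")
if ratio < 0.8:  # common heuristic threshold; tune to your context and regulation
    print(f"Potential disparate impact detected (ratio = {ratio:.2f})")
```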

GraphQL for server-side resource aggregation
Trial

We see more and more tools, such as Apollo Federation, that can aggregate multiple GraphQL endpoints into a single graph. However, we caution against misusing GraphQL, especially when turning it into a server-to-server protocol. Our practice is to use GraphQL for server-side resource aggregation only. When using this pattern, the microservices continue to expose well-defined RESTful APIs, while under-the-hood aggregate services or BFF (Backend for Frontends) patterns use GraphQL resolvers as the implementation for stitching resources from other services. The shape of the graph is driven by domain-modeling exercises to ensure ubiquitous language is limited to subgraphs where needed (in the case of one-microservice-per-bounded-context). This technique simplifies the internal implementation of aggregate services or BFFs, while encouraging good modeling of services to avoid anemic REST.
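To illustrate the pattern (not a prescribed stack): a BFF built with the graphene library keeps GraphQL at its own edge while its resolvers stitch data from the downstream microservices' RESTful APIs. The service URLs, field names and the Customer/Order types below are illustrative assumptions.

```python
# Sketch of server-side resource aggregation: a BFF exposes a GraphQL schema and
# its resolvers call RESTful microservices (the URLs are hypothetical).
import graphene
import requests

class Order(graphene.ObjectType):
    id = graphene.ID()
    total = graphene.Float()

class Customer(graphene.ObjectType):
    id = graphene.ID()
    name = graphene.String()
    orders = graphene.List(Order)

    def resolve_orders(parent, info):
        # Aggregate from the orders service's well-defined REST API.
        response = requests.get(f"http://orders.internal/customers/{parent.id}/orders")
        return [Order(**order) for order in response.json()]

class Query(graphene.ObjectType):
    customer = graphene.Field(Customer, id=graphene.ID(required=True))

    def resolve_customer(root, info, id):
        data = requests.get(f"http://customers.internal/customers/{id}").json()
        return Customer(id=data["id"], name=data["name"])

schema = graphene.Schema(query=Query)
# Example query: schema.execute('{ customer(id: "42") { name orders { total } } }')
```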
Micro frontends for mobile
Trial

Since introducing it in the Radar in 2016, we've seen widespread adoption of micro frontends for web UIs. Recently, however, we've seen projects extend this architectural style to include micro frontends for mobile applications as well. When the application becomes sufficiently large and complex, it becomes necessary to distribute the development over multiple teams. This presents the challenge of maintaining team autonomy while integrating their work into a single app. Although we've seen teams writing their own frameworks to enable this development style, existing modularization frameworks such as Atlas and Beehive can also simplify the problem of integrating multiteam app development.

Platform engineering product teams
Trial

The adoption of cloud and DevOps — while increasing the productivity of teams who can now move more quickly with reduced dependency on centralized operations teams and infrastructure — also has constrained teams that lack the skills to self-manage a full application and operations stack. Some organizations have tackled this challenge by creating platform engineering product teams. These teams maintain an internal platform that enables delivery teams to deploy and operate systems with reduced lead time and stack complexity. The emphasis here is on API-driven self-service and supporting tools, with delivery teams still responsible for supporting what they deploy onto the platform. Organizations that consider establishing such a platform team should be very cautious not to accidentally create a separate DevOps team, nor should they simply relabel their existing hosting and operations structure as a platform. If you're wondering how to best set up platform teams, we've been using the concepts from Team Topologies to split platform teams in our projects into enablement teams, core "platform within a platform" teams and stream-focused teams.

Security policy as code
Trial

Security policies are rules and procedures that protect our systems from threats and disruption. For example, access control policies define and enforce who can access which services and resources under what circumstances; or network security policies can dynamically limit the traffic rate to a particular service. The complexity of the technology landscape today demands treating security policy as code: define and keep policies under version control, automatically validate them, automatically deploy them and monitor their performance. Tools such as Open Policy Agent (OPA) or platforms such as Istio provide flexible policy definition and enforcement mechanisms that support the practice of security policy as code.
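One way this looks in practice, sketched here rather than prescribed: a service asks a running Open Policy Agent server for a decision instead of hard-coding the rule. The host, the hypothetical httpapi/authz policy package and the input fields are assumptions; the policy file itself lives in version control and is validated and deployed through the same pipeline as the services it protects.

```python
# Sketch: query a running Open Policy Agent server for an access decision via
# OPA's Data API (POST /v1/data/<policy path>). The package path httpapi/authz
# and the input fields are illustrative assumptions.
import requests

def is_allowed(user: str, action: str, resource: str) -> bool:
    decision = requests.post(
        "http://localhost:8181/v1/data/httpapi/authz/allow",
        json={"input": {"user": user, "action": action, "resource": resource}},
        timeout=2,
    ).json()
    # OPA returns {"result": true/false}; treat an undefined result as a deny.
    return decision.get("result", False) is True

if __name__ == "__main__":
    print(is_allowed("alice", "read", "/reports/q1"))
```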

Semi-supervised learning loops
Trial

Semi-supervised learning loops are a class of iterative machine-learning workflows that take advantage of the relationships to be found in unlabeled data. These techniques may improve models by combining labeled and unlabeled data sets in various ways. In other cases they compare models trained on different subsets of the data. Unlike either unsupervised learning, where a machine infers classes in unlabeled data, or supervised techniques, where the training set is entirely labeled, semi-supervised techniques take advantage of a small set of labeled data and a much larger set of unlabeled data. Semi-supervised learning is also closely related to active learning techniques, where a human is directed to selectively label ambiguous data points. Since expert humans who can accurately label data are a scarce resource and labeling is often the most time-consuming activity in the machine-learning workflow, semi-supervised techniques lower the cost of training and make machine learning feasible for a new class of users. We're also seeing the application of weakly supervised techniques, where machine-labeled data is used but is trusted less than the data labeled by humans.

Transfer learning for NLP
Trial

We had this technique in Assess previously. The innovations in the NLP landscape continue at a great pace, and we're able to leverage these innovations in our projects thanks to the ubiquitous transfer learning for NLP. The GLUE benchmark (a suite of language understanding tasks) scores have seen dramatic progress over the past couple of years, with average scores moving from 70.0 at launch to some of the leaders crossing 90.0 as of April 2020. A lot of our projects in the NLP domain are able to make significant progress by starting from pretrained models such as ELMo, BERT and ERNIE, among others, and then fine-tuning them based on the project needs.
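As a minimal sketch of the fine-tuning step, using the Hugging Face transformers library (one common way to consume pretrained models such as BERT, not the only one); the model name, label count, example texts and number of training steps are illustrative assumptions.

```python
# Sketch: start from a pretrained language model and fine-tune it on a small
# task-specific labeled set (transfer learning for NLP).
# Assumes the transformers and torch packages; data and labels are illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

texts = ["the delivery was fast and painless", "the release broke checkout again"]
labels = torch.tensor([1, 0])  # 1 = positive, 0 = negative (illustrative task)

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # a few steps only; real projects fine-tune on far more data
    outputs = model(**batch, labels=labels)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```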
Use "remote native" processes and approaches
Trial

Distributed teams come in many shapes and setups; delivery teams in a 100% single-site co-located setup, however, have become the exception for us. Most of our teams are either multisite teams or have at least some team members working off-site. Therefore, using "remote native" processes and approaches by default can help significantly with the overall team flow and effectiveness. This starts with making sure that everybody has access to the necessary remote systems. Moreover, using tools such as Visual Studio Live Share, MURAL or Jamboard turns online workshops and remote pairing into routines instead of ineffective exceptions. But "remote native" goes beyond a lift-and-shift of co-location practices to the digital world: embracing more asynchronous communication, even more discipline around decision documentation, and "everybody always remote" meetings are other approaches our teams practice by default to optimize for location fluidity.

Zero trust architecture (ZTA)
Trial

The technology landscape of organizations today is increasingly complex, with assets — data, functions, infrastructure and users — spread across security boundaries, such as local hosts, multiple cloud providers and a variety of SaaS vendors. This demands a paradigm shift in enterprise security planning and systems architecture, moving from static and slow-changing security policy management, based on trust zones and network configurations, to dynamic, fine-grained security policy enforcement based on temporal access privileges. Zero trust architecture (ZTA) is an organization's strategy and journey to implement zero-trust security principles for all of its assets — such as devices, infrastructure, services, data and users — and includes implementing practices such as securing all access and communications regardless of the network location, enforcing policies as code based on the least privilege and as granular as possible, and continuous monitoring and automated mitigation of threats. Our Radar reflects many of the enabling techniques, such as security policy as code, sidecars for endpoint security and BeyondCorp. If you're on your journey toward ZTA, refer to the NIST ZTA publication to learn more about principles, enabling technology components and migration patterns, as well as Google's publication on BeyondProd.

Data mesh
Assess

Data mesh is an architectural and organizational paradigm that challenges the age-old assumption that we must centralize big analytical data to use it, have data all in one place or have it managed by a centralized data team to deliver value. Data mesh claims that for big data to fuel innovation, its ownership must be federated among domain data owners who are accountable for providing their data as products (with the support of a self-serve data platform to abstract the technical complexity involved in serving data products); it must also adopt a new form of federated governance through automation to enable interoperability of domain-oriented data products. Decentralization, along with interoperability and a focus on the experience of data consumers, is key to the democratization of innovation using data.

If your organization has a large number of domains with numerous systems and teams generating data, or a diverse set of data-driven use cases and access patterns, we suggest you assess data mesh. Implementation of data mesh requires investment in building a self-serve data platform and embracing an organizational change for domains to take on the long-term ownership of their data products, as well as an incentive structure that rewards domains serving and utilizing data as a product.

Decentralized identity
Assess

Since the birth of the internet, the technology landscape has experienced an accelerated evolution toward decentralization. While protocols such as HTTP and architectural patterns such as microservices or data mesh enable decentralized implementations, identity management remains centralized. The emergence of distributed ledger technology (DLT), however, provides the opportunity to enable the concept of decentralized identity. In a decentralized identity system, entities — that is, discrete identifiable units such as people, organizations and things — are free to use any shared root of trust. In contrast, conventional identity management systems are based on centralized authorities and registries such as corporate directory services, certificate authorities or domain name registries. The development of decentralized identifiers — globally unique, persistent and self-sovereign identifiers that are cryptographically verifiable — is a major enabling standard. Although scaled implementations of decentralized identifiers in the wild are still rare, we're excited by the premise of this movement and have started using the concept in our architecture. For the latest experiments and industry collaborations, check out the Decentralized Identity Foundation.

Declarative data pipeline definition
Assess

Many data pipelines are defined in a large, more or less imperative script written in Python or Scala. The script contains the logic of the individual steps as well as the code chaining the steps together. When faced with a similar situation in Selenium tests, developers discovered the Page Object pattern, and later many behavior-driven development (BDD) frameworks implemented a split between step definitions and their composition. Some teams are now experimenting with bringing the same thinking to data engineering. A separate declarative data pipeline definition, maybe written in YAML, contains only the declaration and sequence of steps. It states input and output data sets but refers to scripts if and when more complex logic is needed. With A La Mode, we're seeing the first open source tool appear in this space.
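A minimal sketch of the split described above: a declarative definition lists the steps and their data sets, and a thin runner resolves each step to code. The YAML layout, step names and registry are illustrative assumptions, not the format of A La Mode or any other particular tool.

```python
# Sketch: separate the declarative pipeline definition from the step logic.
# The YAML schema and step registry below are illustrative, not a standard.
import yaml  # provided by the PyYAML package

PIPELINE_DEFINITION = """
pipeline: daily_orders
steps:
  - name: ingest_orders
    source: s3://raw/orders/
    target: warehouse.staging_orders
  - name: deduplicate
    source: warehouse.staging_orders
    target: warehouse.orders
    script: scripts/deduplicate.py   # fall back to a script for complex logic
"""

def ingest_orders(source, target):
    print(f"ingesting {source} into {target}")

def deduplicate(source, target, script=None):
    print(f"running {script or 'built-in dedup'} from {source} to {target}")

STEP_REGISTRY = {"ingest_orders": ingest_orders, "deduplicate": deduplicate}

definition = yaml.safe_load(PIPELINE_DEFINITION)
for step in definition["steps"]:
    name = step.pop("name")
    STEP_REGISTRY[name](**step)  # the declaration drives which code runs
```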

DeepWalk
Assess

DeepWalk is an algorithm that helps apply machine learning on graphs. When working on data sets that are represented as graphs, one of the key problems is to extract features from the graph. This is where DeepWalk can help. It uses SkipGram to construct node embeddings by viewing the graph as a language, where each node is a unique word in the language and random walks of finite length on the graph constitute sentences. These embeddings can then be used by various ML models. DeepWalk is one of the techniques we're trialling on some of our projects where we've needed to apply machine learning on graphs.
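A compact sketch of that idea, pairing networkx for the random walks with gensim's SkipGram Word2Vec. This is an approximation of the DeepWalk approach rather than a reference implementation; the toy graph, walk length and embedding size are illustrative assumptions.

```python
# Sketch of the DeepWalk idea: random walks over a graph become "sentences",
# and SkipGram (Word2Vec with sg=1) learns an embedding per node.
# Assumes the networkx and gensim packages; the example graph is illustrative.
import random
import networkx as nx
from gensim.models import Word2Vec

graph = nx.karate_club_graph()  # small built-in example graph

def random_walk(g, start, length=10):
    walk = [start]
    while len(walk) < length:
        neighbors = list(g.neighbors(walk[-1]))
        if not neighbors:
            break
        walk.append(random.choice(neighbors))
    return [str(node) for node in walk]  # Word2Vec expects string tokens

walks = [random_walk(graph, node) for node in graph.nodes() for _ in range(20)]

model = Word2Vec(sentences=walks, vector_size=64, window=5, sg=1, min_count=1)
embedding_of_node_0 = model.wv[str(0)]  # feature vector usable by downstream ML models
```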
Managing stateful systems via container orchestration
Assess

We recommend caution in managing stateful systems via container orchestration platforms such as Kubernetes. Some databases are not built with native support for orchestration — they don't expect a scheduler to kill and relocate them to a different host. Building a highly available service on top of such databases is not trivial, and we still recommend running them on bare metal hosts or a virtual machine (VM) rather than force-fitting them into a container orchestration platform.

Preflight builds
Assess

Even though we strongly advocate in favor of CI rather than Gitflow, we know that committing straight to the trunk and running the CI on a master branch can be ineffective if the team is too big, the builds are slow or flaky, or the team lacks the discipline to run the full test suite locally. In this situation a red build can block multiple devs or pairs of devs. Instead of fixing the underlying root cause — slow builds, the inability to run tests locally or monolithic architectures that necessitate many people working in the same area — teams usually rely on feature branches to bypass these issues. We discourage feature branches, given they may require significant effort to resolve merge conflicts, and they introduce longer feedback loops and potential bugs during conflict resolution. Instead, we propose using preflight builds as an alternative: these are pull request–based builds for "micro branches" that live only for the duration of the pipeline run, with a branch opened for every commit. To help automate this workflow, we've come across bots such as Bors, which automates merging to master and branch deletion when the mini branch build succeeds. We're assessing this flow, and you should too; but don't use this to solve the wrong problem, as it can lead to misuse of branches and may cause more harm than benefit.

Cloud lift and shift
Hold

It is rather curious that after over a decade of industry experience with cloud migration, we still feel it's necessary to call out cloud lift and shift, a practice that views the cloud simply as a hosting solution, resulting in the replication of an existing architecture, security practices and IT operational models in the cloud. This fails to realize the cloud's promises of agility and digital innovation. A cloud migration requires intentional change across multiple axes toward a cloud-native state, and depending on the unique migration circumstances, each organization might end up somewhere on the spectrum from cloud lift and shift to cloud native. Systems architecture, for example, is one of the pillars of delivery agility and often requires change. The temptation to simply lift and shift existing systems as containers to the cloud can be strong. While this tactic can speed up cloud migration, it falls short when it comes to creating agility and delivering features and value. Enterprise security in the cloud is fundamentally different from traditional perimeter-based security through firewalls and zoning, and it demands a journey toward zero trust architecture. The IT operating model too has to be reformed to safely provide cloud services through self-serve automated platforms and empower teams to take more of the operational responsibility and gain autonomy. Last but not least, organizations must build a foundation to enable continuous change, such as creating pipelines with continuous testing of applications and infrastructure as part of the migration. These will help the migration process, result in a more robust and well-factored system and give organizations a way to continue to evolve and improve their systems.

Legacy migration feature parity
Hold

We find that more and more organizations need to replace aging legacy systems to keep up with the demands of their customers (both internal and external). One antipattern we keep seeing is legacy migration feature parity, the desire to retain feature parity with the old system. We see this as a huge missed opportunity. Often the old systems have bloated over time, with many features unused by users (50% according to a 2014 Standish Group report) and business processes that have evolved over time. Replacing these features is a waste. Our advice: convince your customers to take a step back, understand what their users currently need and prioritize these needs against business outcomes and metrics — which often is easier said than done. This means conducting user research and applying modern product development practices rather than simply replacing the existing ones.
Log aggregation for business analytics
Hold

Several years ago, a new generation of log aggregation platforms emerged that were capable of storing and searching over vast amounts of log data to uncover trends and insights in operational data. Splunk was the most prominent but by no means the only example of these tools. Because these platforms provide broad operational and security visibility across the entire estate of applications, administrators and developers have grown increasingly dependent on them. This enthusiasm spread as stakeholders discovered that they could use log aggregation for business analytics. However, business needs can quickly outstrip the flexibility and usability of these tools. Logs intended for technical observability are often inadequate to infer deep customer understanding. We prefer either to use tools and metrics designed for customer analytics or to take a more event-driven approach to observability, where both business and operational events are collected and stored in a way that they can be replayed and processed by more purpose-built tools.

Long-lived branches with Gitflow
Hold

Five years ago we highlighted the problems with long-lived branches with Gitflow. Essentially, long-lived branches are the opposite of continuously integrating all changes to the source code, and in our experience continuous integration is the better approach for most kinds of software development. Later we extended our caution to Gitflow itself, because we saw teams using it almost exclusively with long-lived branches. Today, we still see teams in settings where continuous delivery of web-based systems is the stated goal being drawn to long-lived branches. So we were delighted that the author of Gitflow has now added a note to his original article, explaining that Gitflow was not intended for such use cases.

Snapshot testing only
Hold

The value of snapshot testing is undeniable when working with legacy systems: it ensures that the system continues to work and that the legacy code doesn't break. However, we're seeing the common, rather harmful practice of using snapshot testing only as the primary test mechanism. Snapshot tests validate the exact result a component generates in the DOM, not the component's behavior; therefore, they can be weak and unreliable, fostering the "just delete the snapshot and regenerate it" bad practice. Instead, you should test the logic and behavior of the components by emulating what users would do. This mindset is encouraged by tools in the Testing Library family.

Platforms
.NET Core
Adopt

We previously had .NET Core in Adopt, indicating that it had become our default for .NET projects, but we felt it's worth again calling attention to .NET Core. With the release of .NET Core 3.x last year, the bulk of the features from .NET Framework have now been ported into .NET Core. With the announcement that .NET Framework is on its last release, this has reinforced the view that .NET Core is the future of .NET. Microsoft has done a lot of work to make .NET Core container friendly. Most of our .NET Core–based projects target Linux and are often deployed as containers. The upcoming .NET 5 release looks promising, and we're looking forward to it.

Istio
Adopt

If you're building and operating a scaled microservices architecture and have embraced Kubernetes, adopting a service mesh to manage all cross-cutting aspects of running the architecture is a default position. Among various implementations of service mesh, Istio has gained majority adoption. It has a rich feature set, including service discovery, traffic management, service-to-service and origin-to-service security, observability (including telemetry and distributed tracing), rolling releases and resiliency. Its user experience has been improved in its latest releases because of its ease of installation and control plane architecture. Istio has lowered the bar for implementing large-scale microservices with operational quality for many of our clients, while admitting that operating your own Istio and Kubernetes instances requires adequate knowledge and internal resources, which is not for the fainthearted.

Anka
Trial

Anka is a set of tools to create, manage, distribute, build and test reproducible macOS virtual environments for iOS and macOS development. It brings a Docker-like experience to macOS environments: instant start, a CLI to manage virtual machines and a registry to version and tag virtual machines for distribution. We've used Anka to build a macOS private cloud for a client. This tool is worth considering when virtualizing iOS and macOS environments.

Argo CD
Trial

Without making a judgment on the GitOps technique, we'd like to talk about Argo CD within the scope of deploying and monitoring applications in Kubernetes environments. Based on its ability to automate the deployment of the desired application state in the specified target environments in Kubernetes, and our good experience with troubleshooting failed deployments, verifying logs and monitoring deployment status, we recommend you give Argo CD a try. You can even see graphically what is going on in the cluster, how a change is propagated and how pods are created and destroyed in real time.

Crowdin
Trial

Most of the projects with multilingual support start with development teams building features in one language and managing the rest through offline translation via email and spreadsheets. Although this simple setup works, things can quickly get out of hand. You may have to keep answering the same questions for different language translators, sucking the energy out of the collaboration between translators, proofreaders and the development team. Crowdin is one of a handful of platforms that help in streamlining the localization workflow of your project. With Crowdin the development team can continue building features, while the platform streamlines the text that needs translation into an online workflow. We like that Crowdin nudges teams to continuously and incrementally incorporate translations rather than managing them in large batches toward the end.
You may have Not everyone needs a self-hosted OAuth2 to keep answering the same questions solution, but if you do, have a look at for different language translators, sucking Firebase Hydra — a fully compliant open source the energy out of the collaboration Trial OAuth2 server and OpenID connect between translators, proofreaders and provider. Hydra has in-memory storage the development team. Crowdin is one Google’s Firebase has undergone significant support for development and a relational of a handful of platforms that help in evolution since we mentioned it as part (PostgreSQL) for production streamlining the localization workflow of of a serverless architecture in 2016. use cases. Hydra as such is stateless and your project. With Crowdin the development Firebase is a comprehensive platform for easy to scale horizontally in platforms team can continue building features, while building mobile and web apps in a way such as Kubernetes. Depending on your

OpenTelemetry
Trial

OpenTelemetry is an open source observability project that merges OpenTracing and OpenCensus. The OpenTelemetry project includes the specification, libraries, agents and other components needed to capture telemetry from services to better observe, manage and debug them. It covers the three pillars of observability — distributed tracing, metrics and logging (currently in beta) — and its specification connects these three pieces through correlations; thus you can use metrics to pinpoint a problem, locate the corresponding traces to discover where the problem occurred, and ultimately study the corresponding logs to find the exact root cause. OpenTelemetry components can be connected to back-end observability systems such as Prometheus and Jaeger, among others. The formation of OpenTelemetry is a positive step toward the convergence of standardization and the simplification of tooling.
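A minimal sketch of emitting a trace with the OpenTelemetry Python SDK. The console exporter is used only to keep the example self-contained; a real service would export spans to a back end such as Jaeger. The service, span and attribute names are illustrative assumptions.

```python
# Sketch: create nested spans with the OpenTelemetry Python SDK and print them
# via the console exporter (swap in a Jaeger/OTLP exporter for a real back end).
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")

with tracer.start_as_current_span("handle-checkout") as span:
    span.set_attribute("order.id", "ORD-42")
    with tracer.start_as_current_span("charge-payment"):
        pass  # call the payment service here; its span nests under the parent
```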
Snowflake
Trial

Snowflake has proven to be a robust SaaS big data storage, warehouse or lake solution for many of our clients. It has a superior architecture to scale storage, compute, and services to load, unload and use data. It's also very flexible: it supports storage of structured, semi-structured and unstructured data; provides a growing list of connectors for different access patterns such as Spark for data science and SQL for analytics; and runs on multiple cloud providers. Our advice to many of our clients is to use managed services for utility technology such as big data storage; however, if risk and regulations prohibit the use of managed services, then Snowflake is a good candidate for companies with large volumes of data and heavy processing workloads. Although we've been successful using Snowflake in our medium-sized engagements, we've yet to experience Snowflake in large ecosystems where data needs to be owned across segments of the organization.
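For the SQL-for-analytics access pattern mentioned above, a minimal sketch using the snowflake-connector-python package; the account identifier, credentials and warehouse, database, schema and table names are illustrative assumptions.

```python
# Sketch: run an analytical query against Snowflake from Python.
# Account, credentials and object names are illustrative assumptions.
import snowflake.connector

conn = snowflake.connector.connect(
    account="acme-analytics",     # your Snowflake account identifier
    user="ANALYTICS_SVC",
    password="********",
    warehouse="ANALYTICS_WH",     # compute scales independently of storage
    database="SALES",
    schema="PUBLIC",
)

try:
    cursor = conn.cursor()
    cursor.execute(
        "SELECT region, SUM(amount) FROM orders WHERE order_date >= %s GROUP BY region",
        ("2020-01-01",),
    )
    for region, total in cursor.fetchall():
        print(region, total)
finally:
    conn.close()
```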

Anthos
Assess

We see a shift from accidental hybrid or whole-of-estate cloud migration plans to intentional and sophisticated hybrid, poly or portable cloud strategies, where organizations apply multidimensional principles to establish and execute their cloud strategy: where to host their various data and functional assets based on risk, ability to control and performance profiles; how to utilize their on-premise infrastructure investments while reducing the cost of operations; and how to take advantage of multiple cloud providers and their unique differentiated services without creating complexity and friction for users building and operating applications.

Anthos is Google's answer to enable hybrid and multicloud strategies by providing a high-level management and control plane on top of a set of open source technologies such as GKE, Service Mesh and Git-based configuration management. It enables running portable workloads and other assets on different hosting environments, including Google Cloud and on-premises hardware. Although other cloud providers have comparable offerings, Anthos intends to go beyond a hybrid cloud to a portable cloud enabler using open source components, but that is yet to be seen. We're seeing rising interest in Anthos. While Google's approach to managed hybrid cloud environments seems promising, it's not a magic bullet and requires changes in both existing cloud and on-premise assets. Our advice for clients considering Anthos is to make measured tradeoffs between selecting services from the Google Cloud ecosystem and other options, to maintain their right level of neutrality and control.

Apache Pulsar
Assess

Apache Pulsar is an open source pub-sub messaging/streaming platform, competing in a similar space with Apache Kafka. It provides expected functionality — such as low-latency async and sync message delivery and scalable persistent storage of messages — as well as various client libraries. What has excited us to evaluate Pulsar is its ease of scalability, particularly in large organizations with multiple segments of users. Pulsar natively supports multitenancy, georeplication, role-based access control and segregation of billing. We're also looking to Pulsar to solve the problem of a never-ending log of messages for our large-scale data systems, where events are expected to persist indefinitely and subscribers are able to start consuming messages retrospectively. This is supported through a tiered storage model. Although Pulsar is a promising platform for large organizations, there is room for improvement. Its current installation requires administering ZooKeeper and BookKeeper among other pieces of technology. We hope that with its growing adoption, users can soon count on wider community support.
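A minimal producer/consumer sketch with the pulsar-client Python library; the broker URL, topic and subscription names are illustrative assumptions. The tenant and namespace segments of the topic name are where Pulsar's native multitenancy shows up.

```python
# Sketch: produce and consume a message with the pulsar-client library.
# Broker URL and tenant/namespace/topic names are illustrative assumptions.
import pulsar

client = pulsar.Client("pulsar://localhost:6650")

producer = client.create_producer("persistent://acme/payments/transactions")
producer.send(b'{"order": "ORD-42", "amount": 19.99}')

consumer = client.subscribe(
    "persistent://acme/payments/transactions",
    subscription_name="fraud-detection",
)
message = consumer.receive()
print(message.data())
consumer.acknowledge(message)  # retained messages can still be replayed via tiered storage

client.close()
```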
JupyterLab
Assess

JupyterLab is the next-generation web-based user interface for Project Jupyter. If you've been using Jupyter Notebooks, JupyterLab is worth a try; it gives you an interactive environment for Jupyter notebooks, code and data. We see it as an evolution of Jupyter Notebook: it provides a better experience by extending the original capability of allowing code, visualization and documentation to exist in one place.

Marquez
Assess

Marquez is a relatively young open source project for collecting and serving metadata about a data ecosystem. It represents a simple data model to capture metadata such as lineage, upstream and downstream data processing jobs and their status, and a flexible set of tags to capture the attributes of data sets. It provides a simple RESTful API to manage the metadata, which eases the integration of Marquez with other tool sets within the data ecosystem. We've used Marquez as a starting point and easily extended it to fit our needs, such as enforcing security policies as well as changes to its domain language. If you're looking for a small and simple tool to bootstrap storage and visualization of your data-processing jobs and data sets, Marquez is a good place to start.

Matomo
Assess

Matomo (formerly Piwik) is an open source web analytics platform that provides you with full control over your data. You can self-host Matomo and secure your web analytics data from third parties. Matomo also makes it easy to integrate web analytics data with your in-house data platform and lets you build usage models that are tailored to your needs.

MeiliSearch
Assess

MeiliSearch is a fast, easy-to-use and easy-to-deploy text search engine. Over the years Elasticsearch has become the popular choice for scalable text searches. However, if you don't have the volume of data that warrants a distributed solution but still want to provide a fast typo-tolerant search engine, then we recommend assessing MeiliSearch.
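As a rough illustration of how little setup a typo-tolerant search can take, here is a sketch using the meilisearch Python client against a locally running instance; the URL, key, index name and documents are placeholders.

import meilisearch

client = meilisearch.Client('http://127.0.0.1:7700', 'masterKey')
index = client.index('products')

# add_documents is processed asynchronously; a real script would wait for the
# indexing task to finish before searching.
index.add_documents([
    {'id': 1, 'name': 'espresso machine'},
    {'id': 2, 'name': 'electric kettle'},
])

# Typo tolerance: the misspelled query still finds the espresso machine.
results = index.search('expresso')
print([hit['name'] for hit in results['hits']])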

Stratos
Assess

Ultraleap (previously Leap Motion) has been a leader in the XR space for some time, creating remarkable hand-tracking hardware that allows a user's hands to make the leap into virtual reality. Stratos is Ultraleap's underlying haptics, sensors and software platform, and it can use targeted ultrasound to create haptic feedback in mid-air. A use case is responding to a driver's hand gesture to change the air conditioning in the car and providing haptic feedback as part of the interface. We're excited to see this technology and what creative technologists might do to incorporate it into their use cases.

Trillian
Assess

Trillian is a cryptographically verifiable, centralized data store. For trustless, decentralized environments, you can use blockchain-based distributed ledgers. For enterprise environments, however, where the cost of CPU-heavy consensus protocols is unwarranted, we recommend you give Trillian a try.

Node overload
Hold

Technologies, especially wildly popular ones, have a tendency to be overused. What we're seeing at the moment is Node overload, a tendency to use Node.js indiscriminately or for the wrong reasons. Among these, two stand out in our opinion. Firstly, we frequently hear that Node should be used so that all programming can be done in one programming language. Our view remains that polyglot programming is a better approach, and this still goes both ways. Secondly, we often hear teams cite performance as a reason to choose Node.js. Although there are myriads of more or less sensible benchmarks, this perception is rooted in history. When Node.js became popular, it was the first major framework to embrace a nonblocking programming model, which made it very efficient for IO-heavy tasks. (We mentioned this in our write-up of Node.js in 2012.) Due to its single-threaded nature, Node.js was never a good choice for compute-heavy workloads, though, and now that capable nonblocking frameworks also exist on other platforms — some with elegant, modern APIs — performance is no longer a reason to choose Node.js.

Tools

Adopt
50. Cypress
51. Figma

Trial
52. Dojo
53. DVC
54. Experiment tracking tools for machine learning
55. Goss
56. Jaeger
57. k9s
58. kind
59. mkcert
60. MURAL
61. Open Policy Agent (OPA)
62. Optimal Workshop
63. Phrase
64. ScoutSuite
65. Visual regression testing tools
66. Visual Studio Live Share

Assess
67. Apache Superset
68. AsyncAPI
69. ConfigCat
70. Gitpod
71. Gloo
72. Lens
73. Manifold
74. Sizzy
75. Snowpack
76. tfsec

Hold

Cypress
Adopt

Cypress is still a favorite among our teams where developers manage end-to-end tests themselves, as part of a healthy test pyramid, of course. We decided to call it out again in this Radar because recent versions of Cypress have added support for Firefox, and we strongly suggest testing on multiple browsers. The dominance of Chrome and Chromium-based browsers has led to a worrying trend of teams seemingly only testing with Chrome, which can lead to nasty surprises.

Figma
Adopt

Figma has proven to be the go-to tool for collaborative design, not only for designers but for multidisciplinary teams too; it allows developers and other roles to view and comment on designs through the browser without the desktop version. Compared to its competitors (e.g., Invision or Sketch), which have you use more than one tool for versioning, collaborating and design sharing, Figma puts together all of these features in one tool, which makes it easier for our teams to discover new ideas together. Our teams find Figma very useful, especially in enabling and facilitating remote and distributed design work. In addition to its real-time design and collaboration capabilities, Figma also offers an API that helps to improve the DesignOps process.

Dojo
Trial

A few years ago, Docker — and containers in general — radically changed how we think about packaging, deploying and running our applications. But despite this improvement in production, developers still spend a lot of time setting up development environments and regularly run into "but it works on my machine" style problems. Dojo aims to fix this by creating standard development environments, versioned and released as Docker images. Several of our teams use Dojo to streamline developing, testing and building code from local development through production pipelines.

DVC
Trial

In 2018 we mentioned DVC in conjunction with versioning data for reproducible analytics. Since then it has become a favorite tool for managing experiments in machine learning (ML) projects. Since it's based on Git, DVC is a familiar environment for software developers to bring their engineering practices to ML practice. Because it versions the code that processes data along with the data itself and tracks stages in a pipeline, it helps bring order to the modeling activities without interrupting the analysts' flow.
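For illustration, DVC also exposes a small Python API alongside its CLI; this sketch reads a data file as it existed at a specific Git revision of a hypothetical DVC-tracked repository (the repo URL, path and tag are placeholders).

import dvc.api

# Pin the data to the same Git revision as the experiment that produced it.
with dvc.api.open(
    'data/training.csv',
    repo='https://github.com/example/ml-project',
    rev='v1.0',
) as f:
    print(f.readline())  # e.g., the CSV header of that version of the data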

Experiment tracking tools for machine learning
Trial

The day-to-day work of machine learning often boils down to a series of experiments in selecting a modeling approach and the network topology, training data and optimizing or tweaking the model. Data scientists must use experience and intuition to hypothesize changes and then measure the impact those changes have on the overall performance of the model. As this practice has matured, our teams have found an increasing need for experiment tracking tools for machine learning. These tools help investigators keep track of the experiments and work through them methodically. Although no clear winner has emerged, tools such as MLflow and platforms such as Comet or Neptune have introduced rigor and repeatability into the entire machine learning workflow.
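As one example of what these tools capture, here is a minimal MLflow sketch; the experiment name, parameters and metric value are illustrative placeholders.

import mlflow

mlflow.set_experiment('churn-model')

with mlflow.start_run():
    # Record the hypothesis: which knobs were turned for this experiment.
    mlflow.log_param('learning_rate', 0.01)
    mlflow.log_param('n_estimators', 200)

    # ... train and evaluate the model here ...
    validation_auc = 0.87  # placeholder result

    # Record the measured impact so runs can be compared methodically.
    mlflow.log_metric('validation_auc', validation_auc)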
Goss
Trial

We mentioned Goss, a tool for provisioning testing, in passing in previous Radars, for example, when describing the technique of TDD'ing containers. Although Goss isn't always an alternative to Serverspec, simply because it doesn't offer the same set of features, you may want to consider it when its features meet your needs, especially since it comes as a small, self-contained binary (rather than requiring a Ruby environment). A common anti-pattern with using tools such as Goss is double-entry bookkeeping, where each change in the actual infrastructure as code files requires a corresponding change in the test assertions. Such tests are maintenance heavy and, because of the close correspondence between code and test, failures mostly occur when an engineer updates one side and forgets the other. And these tests rarely catch genuine problems.

Jaeger
Trial

Jaeger is an open source distributed tracing system. Similar to Zipkin, it's been inspired by the Google Dapper paper and complies with OpenTelemetry. We've used Jaeger successfully with Istio and Envoy on Kubernetes and like its UI. Jaeger exposes tracing metrics in the Prometheus format so they can be made available to other tools. However, a new generation of tools such as Honeycomb integrates traces and metrics into a single observability stream for simpler aggregate analysis. Jaeger joined CNCF in 2017 and has recently been elevated to CNCF's highest level of maturity, indicating its widespread deployment into production systems.
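For a sense of what instrumenting a service for Jaeger looks like, here is a hedged sketch using the jaeger-client Python library and its OpenTracing-style API; the service name, sampler settings and span names are placeholders.

from jaeger_client import Config

config = Config(
    config={
        'sampler': {'type': 'const', 'param': 1},  # sample every trace
        'logging': True,
    },
    service_name='checkout-service',
)
tracer = config.initialize_tracer()

# Spans nest to form a trace that Jaeger's UI can visualize end to end.
with tracer.start_span('process-order') as span:
    span.set_tag('order.id', '12345')
    with tracer.start_span('charge-card', child_of=span):
        pass  # call the payment service here

tracer.close()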

k9s
Trial

We continue to be ardent supporters of infrastructure as code, and we continue to believe that a robust monitoring solution is a prerequisite for operating distributed applications. Sometimes an interactive tool such as the AWS web console can be a useful addition. It allows us to explore all kinds of resources in an ad-hoc fashion without having to remember every single obscure command. Using an interactive tool to make manual modifications on the fly is still a questionable practice, though. For Kubernetes we now have k9s, which provides an interactive interface for basically everything that kubectl can do. And to boot, it's not a web application but runs inside a terminal window, evoking fond memories of Midnight Commander for some of us.

kind
Trial

kind is a tool for running local Kubernetes clusters using Docker container nodes. With kubetest integration, kind makes it easy to do end-to-end testing on Kubernetes. We've used kind to create ephemeral Kubernetes clusters to test Kubernetes resources such as Operators and Custom Resource Definitions (CRDs) in our CI pipelines.

mkcert
Trial

mkcert is a convenient tool for creating locally trusted development certificates. Using certificates from real certificate authorities (CAs) for local development can be challenging if not impossible (for hosts such as example.test, localhost or 127.0.0.1). In such situations self-signed certificates may be your only option. mkcert lets you generate self-signed certificates and installs the local CA in the system root store. For anything other than local development and testing, we strongly recommend using certificates from real CAs to avoid trust issues.

MURAL
Trial

MURAL describes itself as a "digital workspace for visual collaboration" and allows teams to interact with a shared workspace based on a whiteboard/sticky notes metaphor. Its features include voting, commenting, notes and "follow the presenter." We particularly like the template feature that allows a facilitator to design and then reuse guided sessions with a team. Each of the major collaboration suites has a tool in this space (for example, Google Jamboard and Microsoft Whiteboard) and these are worth investigating, but we've found MURAL to be slick, effective and flexible.

Open Policy Agent (OPA)
Trial

Open Policy Agent (OPA) has rapidly become a favored component of many distributed cloud-native solutions that we build for our clients. OPA provides a uniform framework and language for declaring, enforcing and controlling policies for various components of a cloud-native solution. It's a great example of a tool that implements security policy as code. We've had a smooth experience using OPA in multiple scenarios, including deploying resources to K8s clusters, enforcing access control across services in a service mesh and fine-grained security controls as code for accessing application resources. A recent commercial offering, Styra's Declarative Authorization Service (DAS), eases the adoption of OPA for enterprises by adding a management tool, or control plane, to OPA for K8s with a prebuilt policy library, impact analysis of the policies and logging capabilities. We look forward to the maturity and extension of OPA beyond operational services to (big) data-centric solutions.
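To show how an application typically consults OPA for a decision, here is a sketch that queries the data API of a locally running OPA server from Python; the policy package path and input document are illustrative assumptions.

import requests

decision = requests.post(
    'http://localhost:8181/v1/data/httpapi/authz/allow',
    json={'input': {'user': 'alice', 'method': 'GET', 'path': ['reports', '42']}},
).json()

# OPA evaluates the Rego policy loaded under that package and returns a result.
if decision.get('result') is True:
    print('request allowed')
else:
    print('request denied')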

Optimal Workshop
Trial

UX research demands data collection and analysis to make better decisions about the products we need to build. Our teams find Optimal Workshop useful because it makes it easy to validate prototypes and configure tests for data collection and thus make better decisions. Features such as first-click, card sorting, or a heatmap of user interaction help to both validate prototypes and improve website navigation and information display. It's an ideal tool for distributed teams since it allows them to conduct remote research.

Phrase
Trial

As mentioned in our description of Crowdin, you now have a choice of platforms to manage the translation of a product into multiple languages instead of emailing spreadsheets. Our teams report positive experiences with Phrase, emphasizing that it's easy to use for all key user groups. Translators use a convenient browser-based UI. Managers can add new fields and synchronize translations with other teams in the same UI. Developers can access Phrase locally and from a build pipeline. A feature that deserves a specific mention is the ability to apply versioning to translations through tags, which makes it possible to compare the look of different translations inside the actual product.

ScoutSuite
Trial

ScoutSuite is an expanded and updated tool based on Scout2 (featured in the Radar in 2018) that provides security posture assessment across AWS, Azure, GCP and other cloud providers. It works by automatically aggregating configuration data for an environment and applying rules to audit the environment. We've found this very useful across projects for doing point-in-time security assessments.

Visual regression testing tools
Trial

Since we first mentioned visual regression testing tools in 2014, the use of the technique has spread and the tools landscape has evolved. BackstopJS remains an excellent choice with new features being added regularly, including support for running inside Docker containers. Loki was featured in our previous Radar. Applitools, CrossBrowserTesting and Percy are SaaS solutions. Another notable mention is Resemble.js, an image diffing library. Although most teams use it indirectly as part of BackstopJS, some of our teams have been using it to analyze and compare images of web pages directly. In general, our experience shows that visual regression tools are less useful in the early stages when the interface goes through significant changes, but they certainly prove their worth as the product matures and the interface stabilizes.
Visual Studio Live Share
Trial

Visual Studio Live Share is a suite of extensions for Visual Studio Code and Visual Studio. At a time when teams are searching for good remote collaboration options, we want to call attention to the excellent tooling here. Live Share provides a good, low-latency remote-pairing experience and requires significantly less bandwidth than the brute-force approach of sharing your entire desktop. Importantly, developers can work with their preferred configuration, extensions and key mappings during a pairing session. In addition to real-time collaboration for editing and debugging code, Live Share allows voice calls and sharing terminals and servers.

Apache Superset
Assess

Apache Superset is a great business intelligence (BI) tool for data exploration and visualization that works with large data lake and data warehouse setups. It works, for example, with Presto, Amazon Athena and Amazon Redshift and can be nicely integrated with enterprise authentication. Moreover, you don't have to be a data engineer to use it; it's meant to benefit all engineers exploring data in their everyday work. It's worth pointing out that Apache Superset is currently undergoing incubation at the Apache Software Foundation (ASF), meaning it's not yet fully endorsed by ASF.

AsyncAPI
Assess

Open standards are one of the foundational pillars of building distributed systems. For example, the OpenAPI (formerly Swagger) specification, as an industry standard to define RESTful APIs, has been instrumental to the success of distributed architectures such as microservices. It has enabled a proliferation of tooling to support building, testing and monitoring RESTful APIs. However, such standardization has been largely missing in distributed systems for event-driven APIs. AsyncAPI is an open source initiative to create a much needed event-driven and asynchronous API standardization and development tooling. The AsyncAPI specification, inspired by the OpenAPI specification, describes and documents event-driven APIs in a machine-readable format. It's protocol agnostic, so it can be used for APIs that work over many protocols, including MQTT, WebSockets and Kafka. We're eager to see the ongoing improvements of AsyncAPI and further maturity of its tooling ecosystem.

ConfigCat
Assess

If you're looking for a service to support dynamic feature toggles (and bear in mind that simple feature toggles work well too), check out ConfigCat. We'd describe it as "like LaunchDarkly but cheaper and a bit less fancy" and find that it does most of what we need. ConfigCat supports simple feature toggles, user segmentation, and A/B testing and has a generous free tier for low-volume use cases or those just starting out.
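As a rough sketch of how a toggle is consumed, here is an example with the ConfigCat Python SDK; the SDK key and flag name are placeholders, and the exact client factory may vary between SDK versions.

import configcatclient

client = configcatclient.create_client('YOUR-SDK-KEY')

# Evaluate a boolean flag with a safe default if the service is unreachable.
if client.get_value('enableNewCheckout', False):
    print('new checkout flow enabled')
else:
    print('falling back to the old flow')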
Gitpod
Assess

You can build most software following a simple two-step process: check out a repository, and then run a single build script. The process of setting up a full coding environment can still be cumbersome, though. Gitpod addresses this by providing cloud-based, "ready-to-code" environments for GitHub or GitLab repositories. It offers an IDE based on Visual Studio Code that runs inside the web browser. By default, these environments are launched on the Google Cloud Platform, although you can also deploy on-premise solutions. We see the immediate appeal, especially for open source software where this approach can lower the bar for casual contributors. However, it remains to be seen how viable this approach will be in corporate environments.

Gloo
Assess

With the increasing adoption of Kubernetes and service mesh, API gateways have been experiencing an existential crisis in cloud-native distributed systems. After all, many of their capabilities (such as traffic control, security, routing and observability) are now provided by the cluster's ingress controller and mesh gateway. Gloo is a lightweight API gateway that embraces this change; it uses Envoy as its gateway technology, while providing added value such as a cohesive view of the APIs to the external users and applications. It also provides an administrative interface for controlling Envoy gateways and runs and integrates with multiple service mesh implementations such as Linkerd, Istio and AWS App Mesh. While its open source implementation provides the basic capabilities expected from an API gateway, its enterprise edition has a more mature set of security controls such as API key management or integration with OPA. Gloo is a promising lightweight API gateway that plays well with the ecosystem of cloud-native technology and architecture, while avoiding the API gateway trap of enabling business logic to glue APIs for the end user.

Lens
Assess

One of the strengths of Kubernetes is its flexibility and range of configuration possibilities, along with the API-driven, programmable configuration mechanisms and command-line visibility and control using manifest files. However, that strength can also be a weakness: when deployments are complex or when managing multiple clusters, it can be difficult to get a clear picture of the overall status through command-line arguments and manifests alone. Lens attempts to solve this problem with an integrated environment for viewing the current state of the cluster and its workloads, visualizing cluster metrics and changing configurations through an embedded text editor. Rather than a simple point-and-click interface, Lens brings together the tools an administrator would run from the command line into a single, navigable interface. This tool is one of several approaches that are trying to tame the complexity of Kubernetes management. We've yet to see a clear winner in this space, but Lens strikes an interesting balance between a graphical UI and command-line–only tools.

Manifold
Assess

Manifold is a model-agnostic visual debugger for machine learning (ML). Model developers spend a significant amount of time on iterating and improving an existing model rather than creating a new one. By shifting the focus from model space to data space, Manifold supplements existing performance metrics with visual characteristics of the data set that influence the model performance. We think Manifold will be a useful tool to assess in the ML ecosystem.

Sizzy
Assess

Building web applications that look just as intended on a large number of devices and screen sizes can be cumbersome. Sizzy is a SaaS solution that shows many viewports in a single browser window. The application is rendered in all viewports simultaneously and interactions with the application are also synched across the viewports. In our experience, interacting with an application in this way can make it easier to spot potential issues earlier, before a visual regression testing tool flags the issue in the build pipeline. We should mention, though, that some of our developers who tried Sizzy for a while did, on balance, prefer to work with the tooling provided by Chrome.

Snowpack
Assess

Snowpack is an interesting new entrant in the field of JavaScript build tools. The key improvement over other solutions is that Snowpack makes it possible to build applications with modern frameworks such as React.js, Vue.js and Angular without the need for a bundler. Cutting out the bundling step dramatically improves the feedback cycle during development because changes become available in the browser almost immediately. For this magic to work, Snowpack transforms the dependencies in node_modules into single JavaScript files in a new web_modules directory, from where they can be imported as an ECMAScript module (ESM). For IE11 and other browsers that don't support ESM, a workaround is available. Unfortunately, because no browser today can import CSS from JavaScript, using CSS modules is not straightforward.

tfsec
Assess

Security is everyone's concern, and capturing potential security risks early is always better than facing problems later on. In the infrastructure as code space, where Terraform is an obvious choice to manage cloud environments, we now also have tfsec, which is a static analysis tool that helps to scan Terraform templates and find any potential security issues. It comes with preset rules for different cloud providers including AWS and Azure. We always like tools that help to mitigate security risks, and tfsec not only excels in identifying security risks, it's also easy to install and use.

Languages & Frameworks





Adopt
77. React Hooks
78. React Testing Library
79. Vue.js

Trial
80. CSS-in-JS
81. Exposed
82. GraphQL Inspector
83. Karate
84. Koin
85. NestJS
86. PyTorch
87. Rust
88. Sarama
89. SwiftUI

Assess
90. Clinic.js Bubbleprof
91. Deequ
92. ERNIE
93. MediaPipe
94. Tailwind CSS
95. Tamer
96. Wire
97. XState

Hold
98. Enzyme

React Hooks
Adopt

React Hooks have introduced a new approach to managing stateful logic; given React components have always been closer to functions than classes, Hooks have embraced this and brought state to the functions, instead of taking functions as methods to the state with classes. Based on our experience, Hooks improve reuse of functionality among components and code readability. Given Hooks' testability improvements, using React Test Renderer and React Testing Library, and their growing community support, we consider them our approach of choice.

React Testing Library
Adopt

The JavaScript world moves pretty fast, and as we gain more experience using a framework our recommendations change. React Testing Library is a good example of a framework that, with deeper usage, has eclipsed the alternatives to become the sensible default when testing React-based frontends. Our teams like the fact that tests written with this framework are less brittle than with alternative frameworks such as Enzyme, because you're encouraged to test component relationships individually as opposed to testing all implementation details. This mindset is brought by Testing Library, which React Testing Library is part of and which provides a whole family of libraries for Angular and Vue.js, for example.

Vue.js
Adopt

Vue.js has become one of the successfully applied, loved and trusted frontend JavaScript frameworks among our community. Although there are other, well-adopted alternatives, such as React.js, the simplicity of Vue.js in API design, its clear segregation of directives and components (one file per component idiom) and its simpler state management have made it a compelling option among others.

CSS-in-JS
Trial

Since we first mentioned CSS-in-JS as an emerging technique in 2017, it has become much more popular, a trend we also see in our work. With some solid production experience under our belts, we can now recommend CSS-in-JS as a technique to trial. A good starting point is the styled components framework, which we mentioned in our previous Radar. Next to all the positives, though, there usually is a downside when using CSS-in-JS: the calculation of styles at runtime can cause a noticeable lag for end users. With Linaria we're now seeing a new class of frameworks that were created with this issue in mind. Linaria employs a number of techniques to shift most of the performance overhead to build time. Alas, this does come with its own set of trade-offs, most notably a lack of dynamic style support in IE11.

Exposed
Trial

Through their extended use of Kotlin, our development teams have gained experience with more frameworks designed specifically for Kotlin rather than using Java frameworks with Kotlin. Although it's been around for a while, Exposed has caught our attention as a lightweight object-relational mapper (ORM). Exposed has two flavors of database access: a typesafe internal DSL wrapping SQL and an implementation of the data access object (DAO) pattern. It supports features expected from a mature ORM such as handling of many-to-many references, eager loading, and support for joins across entities. We also like that the implementation works without proxies and doesn't rely on reflection, which is certainly beneficial to performance.

GraphQL Inspector
Trial

GraphQL Inspector lets you compare changes between two GraphQL schemas. We've cautioned against the use of GraphQL in the past, and we're happy to see some improvements in tooling around GraphQL since. Most of our teams continue to use GraphQL for server-side resource aggregation, and by integrating GraphQL Inspector in their CI pipelines, we've been able to catch potential breaking changes in the GraphQL schema.

Karate
Trial

Given our experience that tests are the only specifications that really matter, we're always on the lookout for new tools that might help with testing. Karate is an API testing framework whose unique feature is that tests are written directly in Gherkin without relying on a general-purpose programming language to implement test behavior. It's a domain-specific language for describing HTTP-based API tests. Our teams like the readable specification that they get with this tool and recommend keeping Karate tests in the upper levels of the testing pyramid and not overloading its use by making very detailed assertions.

Koin
Trial

As Kotlin is used increasingly for both mobile and server-side development, the associated ecosystem continues to evolve. Koin is a Kotlin framework that handles one of the routine problems in software development: dependency injection. Although you can choose from a variety of dependency injection frameworks for Kotlin, our teams have come to prefer the simplicity of Koin. Koin avoids using annotations and injects either through constructors or by mimicking Kotlin's lazy initialization so that objects are injected only when needed. This is in contrast to the statically compiled Dagger injection framework for Android. Our developers like the lightweight nature of this framework and its built-in testability.

NestJS
Trial

The growth in popularity of Node.js and trends such as Node overload have led to the application of Node.js for developing business applications. We often see problems, such as scalability and maintainability, with large JavaScript-based applications. NestJS is a TypeScript-first framework that makes the development of Node.js applications safer and less error prone. NestJS is opinionated and comes with SOLID principles and an Angular-inspired architecture out of the box. When building Node.js microservices, NestJS is one of the frameworks that our teams commonly use to empower developers to create testable, scalable, loosely coupled and easily maintainable applications.

PyTorch
Trial

Our teams have continued to use and appreciate the PyTorch machine learning framework, and several teams prefer PyTorch over TensorFlow. PyTorch exposes the inner workings of ML that TensorFlow hides, making it easier to debug, and contains constructs that developers are familiar with, such as loops and actions. Recent releases have improved the performance of PyTorch, and we've been using it successfully in production projects.
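As a small illustration of the point about ordinary loops and easy debugging, here is a minimal PyTorch training-step sketch; the model, data and hyperparameters are placeholders.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

features = torch.randn(64, 10)  # a batch of 64 examples
targets = torch.randn(64, 1)

# The training loop is plain Python, so it can be stepped through in a debugger.
for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(features), targets)
    loss.backward()
    optimizer.step()
    print(epoch, loss.item())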
Rust
Trial

Rust is continuously gaining in popularity. We've had heated discussions about which is better, Rust or C++/Go, without a clear winner. However, we're glad to see Rust has improved significantly, with more built-in APIs being added and stabilized, including advanced async support, since we mentioned it in our previous Radar. In addition, Rust has also inspired the design of new languages. For example, the Move language on Libra borrows Rust's way of managing memory to manage resources, ensuring that digital assets can never be copied or implicitly discarded.

Sarama
Trial

Sarama is a Go client library for Apache Kafka. If you're developing your APIs in Go, you'll find Sarama quite easy to set up and manage as it doesn't depend on any native libraries. Sarama has two types of APIs — a high-level API for easily producing and consuming messages and a low-level API for controlling bytes on the wire.

SwiftUI
Trial

Apple has taken a big step forward with their new SwiftUI framework for implementing user interfaces on the macOS and iOS platforms. We like that SwiftUI moves beyond the somewhat kludgy relationship between Interface Builder and Xcode and adopts a coherent, declarative and code-centric approach. You can now view your code and the resulting visual interface side by side in Xcode 11, making for a much better developer experience. The SwiftUI framework also draws inspiration from the React.js world that has dominated web development in recent years. Immutable values in view models and an asynchronous update mechanism make for a unified reactive programming model. This gives developers an entirely native alternative to similar reactive frameworks such as React Native or Flutter. SwiftUI definitely represents the future of Apple UI development, and although new, it has shown its benefits. We've had a great experience with it — and its shallow learning curve. It's worth noting that you should know your customer's use case before jumping into using SwiftUI, given that it doesn't support iOS 12 or below.

Clinic.js Bubbleprof
Assess

When we aim to improve the performance of our code, profiling tools are very useful for identifying bottlenecks or delays that are otherwise hard to pin down, especially in asynchronous operations. Clinic.js Bubbleprof visually represents the async operations in Node.js processes, drawing a map of delays in the application's flow. We like this tool because it helps developers to easily identify and prioritize what to improve in the code.

Deequ
Assess

There are still some tool gaps when applying good software engineering practices in data engineering. Attempting to automate data quality checks between different steps in a data pipeline, one of our teams was surprised when they found only a few tools in this space. They settled on Deequ, a library for writing tests that resemble unit tests for data sets. Deequ is built on top of Apache Spark, and even though it's published by AWS Labs it can be used in environments other than AWS.

ERNIE
Assess

In the previous edition of the Radar we had BERT — which is a key milestone in the NLP landscape. Last year, Baidu released ERNIE 2.0 (Enhanced Representation through kNowledge IntEgration), which outperformed BERT on seven GLUE language-understanding tasks and on all nine Chinese NLP tasks. ERNIE, like BERT, provides unsupervised pretrained language models, which can be fine-tuned by adding output layers to create state-of-the-art models for a variety of NLP tasks. ERNIE differs from traditional pretraining methods in that it is a continual pretraining framework. Instead of training with a small number of pretraining objectives, it can constantly introduce a large variety of pretraining tasks to help the model efficiently learn language representations. We're pretty excited about the advancements in NLP and are looking forward to experimenting with ERNIE on our projects.

MediaPipe
Assess

MediaPipe is a framework for building multimodal (such as video, audio or time series data), cross-platform (for example, Android, iOS, web and edge devices), applied ML pipelines. It provides multiple capabilities, including face detection, hand tracking, gesture detection and object detection. Although MediaPipe is primarily deployed to mobile devices, it's started to show up in the browser thanks to WebAssembly and the XNNPack ML Inference Library. We're exploring MediaPipe for some AR use cases and like what we see so far.
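To give a feel for the hand-tracking capability from Python, here is a minimal sketch using MediaPipe's solutions API; a real application would feed RGB camera frames rather than the placeholder image used here.

import numpy as np
import mediapipe as mp

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder for an RGB camera frame

with mp.solutions.hands.Hands(
    static_image_mode=True,
    max_num_hands=2,
    min_detection_confidence=0.5,
) as hands:
    results = hands.process(frame)
    if results.multi_hand_landmarks:
        for hand in results.multi_hand_landmarks:
            # 21 landmarks per detected hand, each with x, y, z coordinates
            print(hand.landmark[0])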

Tailwind CSS
Assess

CSS tools and frameworks offer predesigned components for fast results; after a while, however, they can complicate customization. Tailwind CSS proposes an interesting approach by providing lower-level utility CSS classes to create building blocks without opinionated styles and aiming for easy customization. The breadth of the low-level utilities allows you to avoid writing any classes or CSS on your own, which leads to a more maintainable codebase in the long term. It seems that Tailwind CSS offers the right balance between reusability and customization to create visual components.

Tamer
Assess

If you need to ingest data from relational databases into a Kafka topic, consider Tamer, which labels itself "a domesticated JDBC source connector for Kafka." Despite being a relatively new framework, we've found Tamer to be more efficient than the Kafka JDBC connector, especially when huge amounts of data are involved.

Wire
Assess

The Golang community has had its fair share of dependency injection skeptics, partly because they confused the pattern with specific frameworks, and developers with a system-programming background naturally dislike runtime overhead caused by reflection. Then along came Wire, a compile-time dependency injection tool that can generate code and wire components together. Wire has no additional runtime overhead, and the static dependency graph is easier to reason about. Whether you handwrite your code or use frameworks, we recommend using dependency injection to encourage modular and testable designs.

XState
Assess

We've featured several state management libraries in the Radar before, but XState takes a slightly different approach. It's a simple JavaScript and TypeScript framework for creating finite state machines and visualizing them as state charts. It integrates with the more popular reactive JavaScript frameworks (Vue.js, Ember.js, React.js and RxJS) and is based on the W3C standard for finite state machines. Another notable feature is the serialization of machine definitions. One thing that we've found helpful when creating finite state machines in other contexts (particularly when writing game logic) is the ability to visualize states and their possible transitions; we like the fact that it's really easy to do this with XState's visualizer.

Enzyme
Hold

We don't always move deprecated tools to Hold in the Radar, but our teams feel strongly that Enzyme has been replaced for unit testing React UI components by React Testing Library. Teams using Enzyme have found that its focus on testing component internals leads to brittle, unmaintainable tests.

We are a software consultancy and community of passionate purpose-led individuals, 7000+ people strong across 43 offices in 14 countries. Over our 25+ year history, we have helped our clients solve complex business problems where technology is the differentiator. When the only constant is change, we prepare you for the unpredictable.

Want to stay up-to-date with all Radar-related news and insights? Follow us on your favorite social channel or become a subscriber.

subscribe now

thoughtworks.com/radar #TWTechRadar