An Opinionated Guide to Technology Frontiers
TECHNOLOGY RADAR VOL. 21
thoughtworks.com/radar | #TWTechRadar

CONTRIBUTORS
The Technology Radar is prepared by the ThoughtWorks Technology Advisory Board. This edition is based on a meeting of the Technology Advisory Board in San Francisco in October 2019.

Rebecca Parsons (CTO), Martin Fowler (Chief Scientist), Bharani Subramaniam, Erik Dörnenburg, Evan Bottcher, Fausto de la Torre, Hao Xu, Ian Cartwright, James Lewis, Jonny LeRoy, Ketan Padegaonkar, Lakshminarasimhan Sudarshan, Marco Valtas, Mike Mason, Neal Ford, Ni Wang, Rachel Laycock, Scott Shaw, Shangqi Liu, Zhamak Dehghani

ABOUT THE RADAR
ThoughtWorkers are passionate about technology. We build it, research it, test it, open source it, write about it, and constantly aim to improve it, for everyone. Our mission is to champion software excellence and revolutionize IT. We create and share the ThoughtWorks Technology Radar in support of that mission. The ThoughtWorks Technology Advisory Board, a group of senior technology leaders at ThoughtWorks, creates the Radar. They meet regularly to discuss the global technology strategy for ThoughtWorks and the technology trends that significantly impact our industry.

The Radar captures the output of the Technology Advisory Board's discussions in a format that provides value to a wide range of stakeholders, from developers to CTOs. The content is intended as a concise summary. We encourage you to explore these technologies. The Radar is graphical in nature, grouping items into techniques, tools, platforms, and languages & frameworks. When Radar items could appear in multiple quadrants, we chose the one that seemed most appropriate. We further group these items in four rings to reflect our current position on them.

For more background on the Radar, see thoughtworks.com/radar/faq.

RADAR AT A GLANCE

ADOPT: We feel strongly that the industry should be adopting these items. We use them when appropriate on our projects.

TRIAL: Worth pursuing. It's important to understand how to build up this capability. Enterprises can try this technology on a project that can handle the risk.

ASSESS: Worth exploring with the goal of understanding how it will affect your enterprise.

HOLD: Proceed with caution.

NEW OR CHANGED / NO CHANGE: Items that are new or have had significant changes since the last Radar are represented as triangles, while items that have not changed are represented as circles. Our Radar is forward looking. To make room for new items, we fade items that haven't moved recently, which isn't a reflection on their value but rather on our limited Radar real estate.

WHAT'S NEW
Highlighted themes in this edition:

Cloud: Is More Less?
As the major cloud providers have achieved near parity on core functionality, the competitive focus has moved to the extra services they can provide, encouraging them to release new offerings at breakneck speed. In their haste to compete, they release new services with rough edges and incomplete features. The emphasis on speed and product proliferation, through either acquisition or hastily created services, often results not merely in bugs but also in poor documentation, difficult automation and incomplete integration with vendors' own parts. This causes frustration for teams trying to deliver software using functionality promised by the cloud provider yet constantly hitting roadblocks. Companies choose cloud vendors for a variety of reasons, and the decision is often made at a high level in the organization. Our advice for teams: don't assume that all of your designated cloud provider's services are of equal quality; test out key capabilities and be open to alternative open-source options or a polycloud strategy if your time-to-market trade-offs merit the operational overhead of managing them.

Interpreting the Black Box of ML
Machine learning often appears to discover solutions to problems that humans can't, using pattern matching, back propagation and other well-known techniques. However, despite their power, many of these models are inherently opaque, meaning that their results can't be explained in terms of logical inference. This is a problem when humans have a right to know how a decision was made or when there is a risk of introducing prejudice, sampling, algorithmic or other bias into the model. We're now seeing the emergence of tools such as What-If and techniques such as ethical bias testing that help us find the limitations and predict the output of these models. While these improvements in interpretability are a step in the right direction, explaining deep neural networks remains an elusive goal. For that reason, data scientists are beginning to regard explainability as a first-class concern when choosing a machine learning model.
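To make ethical bias testing less abstract, here is a minimal Python sketch of a slice-based check that compares a model's favorable-outcome rates across groups; the toy model, records and the four-fifths threshold are illustrative assumptions, not a prescription from the Radar.

    from typing import Callable, Dict, List

    def disparate_impact_ratio(predict: Callable[[dict], int],
                               records: List[dict],
                               group_key: str) -> Dict[str, object]:
        """Favorable-outcome rate per group, plus the worst/best rate ratio."""
        totals: Dict[str, int] = {}
        favorable: Dict[str, int] = {}
        for record in records:
            group = record[group_key]
            totals[group] = totals.get(group, 0) + 1
            favorable[group] = favorable.get(group, 0) + predict(record)
        rates = {g: favorable[g] / totals[g] for g in totals}
        return {"rates": rates, "ratio": min(rates.values()) / max(rates.values())}

    # Toy model and records (hypothetical) that systematically favor group A.
    def toy_predict(record: dict) -> int:
        return 1 if record["score"] > 0.5 else 0

    records = [
        {"group": "A", "score": 0.9}, {"group": "A", "score": 0.8},
        {"group": "A", "score": 0.7}, {"group": "A", "score": 0.2},
        {"group": "B", "score": 0.6}, {"group": "B", "score": 0.4},
        {"group": "B", "score": 0.3}, {"group": "B", "score": 0.1},
    ]

    result = disparate_impact_ratio(toy_predict, records, "group")
    if result["ratio"] < 0.8:  # the commonly cited "four-fifths" heuristic
        print(f"Potential bias detected: {result['rates']}")

A check like this doesn't explain the model, but it gives teams an automated tripwire that can run alongside the rest of the test suite.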
Software Development as a Team Sport
Since the early days of our Technology Radar, we've warned against tools and techniques that isolate members of software teams from one another, hampering feedback and collaboration. Often, when new specializations come along, practitioners, vendors and tools insist that some part of development must be done in an isolated environment, away from the chaos of "regular" development. We reject that claim and constantly look for new ways to reengage software development as a team sport. Feedback is critical when developing something as complex as software. While projects increasingly require specialization, we strive to fit them into regular collaboration and feedback. We particularly dislike the "10x engineers" meme and prefer to focus on creating and enabling "10x teams." We see this currently playing out in how design, data science and security can be integrated into cross-functional teams and supported with solid automation. The next frontier is bringing more governance and compliance activities into the fold.

Protecting the Software Supply Chain
Organizations should resist ivory tower governance rules that require lengthy manual inspection and approval; rather, automated dependency protection (Dependency drift fitness function), security (Security policy as code) and other governance mechanisms (Run cost as architecture fitness function) protect the important but not urgent parts of software projects. This topic of policy, compliance and governance as code reappeared multiple times in our conversations. We see a natural evolution in the software development ecosystem of increasing automation: continuous integration with automated testing, continuous delivery, infrastructure as code, and now automated governance. Building automation around cloud cost, dependency management, architectural structure and other formerly manual processes continues this evolution; we're learning how to automate all important aspects of software delivery.
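As a concrete illustration of governance as code, the following Python sketch shows a dependency drift fitness function that a CI pipeline could run on every build; the package names, versions and drift budget are hypothetical, and a real implementation would query a package registry rather than hard-coded maps.

    # Pinned versions from a (hypothetical) lockfile, and the latest releases
    # that a registry lookup would return.
    PINNED = {"web-framework": "2.1.0", "http-client": "4.0.2", "json-parser": "1.9.3"}
    LATEST = {"web-framework": "2.3.1", "http-client": "6.1.0", "json-parser": "1.9.7"}

    def major(version: str) -> int:
        return int(version.split(".")[0])

    def drift_violations(pinned: dict, latest: dict, budget: int = 1) -> dict:
        """Dependencies whose major-version drift exceeds the allowed budget."""
        return {
            name: (version, latest[name])
            for name, version in pinned.items()
            if major(latest[name]) - major(version) > budget
        }

    violations = drift_violations(PINNED, LATEST)
    if violations:
        # A CI runner treats a non-zero exit code as a failed fitness function.
        raise SystemExit(f"Dependency drift exceeds budget: {violations}")

The point is not this specific rule but that the governance check is versioned, automated and fails fast, like any other test.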
THE RADAR

TECHNIQUES
ADOPT
1. Container security scanning
2. Data integrity at the origin
3. Micro frontends
4. Pipelines for infrastructure as code
5. Run cost as architecture fitness function
6. Testing using real device
TRIAL
7. Automated machine learning (AutoML)
8. Binary attestation
9. Continuous delivery for machine learning (CD4ML)
10. Data discoverability
11. Dependency drift fitness function
12. Design systems
13. Experiment tracking tools for machine learning
14. Explainability as a first-class model selection criterion
15. Security policy as code
16. Sidecars for endpoint security
17. Zhong Tai
ASSESS
18. BERT
19. Data mesh
20. Ethical bias testing
21. Federated learning
22. JAMstack
23. Privacy-preserving record linkage (PPRL) using Bloom filter
24. Semi-supervised learning loops
HOLD
25. 10x engineers
26. Front-end integration via artifact
27. Lambda pinball
28. Legacy migration feature parity

PLATFORMS
TRIAL
29. Apache Flink
30. Apollo Auto
31. GCP Pub/Sub
32. Mongoose OS
33. ROS
ASSESS
34. AWS Cloud Development Kit
35. Azure DevOps
36. Azure Pipelines
37. Crowdin
38. Crux
39. Delta Lake
40. Fission
41. FoundationDB
42. GraalVM
43. Hydra
44. Kuma
45. MicroK8s
46. Oculus Quest
47. ONNX
48. Rootless containers
49. Snowflake
50. Teleport

TOOLS
ADOPT
51. Commitizen
52. ESLint
53. React Styleguidist
TRIAL
54. Bitrise
55. Dependabot
56. Detekt
57. Figma
58. Jib
59. Loki
60. Trivy
61. Twistlock
62. Yocto Project
ASSESS
63. Aplas
64. asdf-vm
65. AWSume
66. dbt
67. Docker Notary
68. Facets
69. Falco
70. in-toto
71. Kubeflow
72. MemGuard
73. Open Policy Agent (OPA)
74. Pumba
75. Skaffold
76. What-If Tool
HOLD
77. Azure Data Factory for orchestration

LANGUAGES & FRAMEWORKS
ADOPT
78. Arrow
79. Flutter
TRIAL
80. jest-when
81. Micronaut
82. React Hooks
83. React Testing Library
84. Styled components
85. TensorFlow
ASSESS
86. Fairseq
87. Flair
88. Gatsby.js
89. GraphQL
90. KotlinTest
91. NestJS
92. Paged.js
93. Quarkus
94. SwiftUI
95. Testcontainers
HOLD
96. Enzyme

TECHNIQUES

Container security scanning
The continued adoption of containers for deployments, especially Docker, has made container security scanning a must-have.

Data integrity at the origin
Systems and teams are most intimately familiar with their data and best positioned to fix it at the source. Data mesh architecture takes this one step further, comparing consumable data to a product, where data quality and its objectives are intrinsic attributes of the product.
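Returning to container security scanning above: here is a minimal Python sketch of a pipeline gate, assuming the Trivy scanner (listed in Tools) is installed and using a hypothetical image name. Trivy's --exit-code and --severity flags turn serious findings into a failed build step.

    import subprocess
    import sys

    # Hypothetical image produced earlier in the pipeline.
    IMAGE = "registry.example.com/orders-service:latest"

    # Scan the image; Trivy exits non-zero when HIGH or CRITICAL
    # vulnerabilities are found, which fails this build step.
    result = subprocess.run(
        ["trivy", "image", "--exit-code", "1",
         "--severity", "HIGH,CRITICAL", IMAGE]
    )
    if result.returncode != 0:
        sys.exit(f"Blocking deployment: vulnerabilities found in {IMAGE}")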