From COBOL to Agentic AI: Transforming the Software Lifecycle

The Personal Story Behind the Transformation
When I arrived in Stockholm in the late 1990s, I worked as a JCL Coordinator for the consulting firm American Management Systems (AMS) at the offices of the telecommunications operator Telia. At that time, JCL (Job Control Language), COBOL, and IMS were already veteran terms, yet still fashionable in the tech industry. The process was linear, predictable, and above all, controlled. The technical designs were practically code on paper, which we approved by signing them with a pen.
Today, those same terms evoke a bygone era of batch jobs and sequential files. Paradoxically, these skills still exist and remain well paid due to the scarcity of experts. Those of us who have lived through one technological transition after another know that the current situation represents a deeper, faster change: the greatest transformation in how we conceive, build, and evolve software since the advent of the Internet.
The arrival of Agentic Artificial Intelligence is reconfiguring not only how we generate code but also how we define, test, deploy, maintain, and operate it. In my opinion, it is a more radical and complete change than the previous ones.
In this post, I want to share my perspective as a professional who has already seen many changes and reflect on what this one means for the technology leaders who must navigate it. That is a lot of ground for one post; it may require sequels.
Why Start This Post Talking About the Mainframe
I bring up that era not out of nostalgia but to convey the magnitude of the current change.
In the mainframe:
- In large projects, we programmed with text editors so that our managers could formally approve changes on paper before bringing them to code.
- The code ran on an expensive and shared machine.
- Errors had immediate and costly operational consequences.
- Exhaustive planning was a necessity, not an option.
- Knowledge was highly specialized and difficult to transfer.
With Agentic AI:
- We describe needs in natural language for the AI to interpret.
- The AI generates code that can run on elastic and economical infrastructure.
- Errors are detected and corrected in real-time, often autonomously.
- Rapid experimentation is the norm, not the exception.
- Knowledge is democratized through shared contexts with AI.
The Mainframe Era: When Code Was Signed on Paper
The Paradigm of Controlled Rigidity
In my mainframe days, the software lifecycle operated under fundamentally different principles than today:
Characteristics of mainframe development:
- The lifecycle was linear and rigid: requirements → development → testing → production.
- Projects were measured in years or, at best, months. Never in weeks.
- We coded business logic in COBOL, with sequential files or IMS databases.
- Batch execution control was managed through JCL, specifying exactly which program to run, which devices to use, and which files to read and write.
- Version control in environments was rudimentary and poorly automated. Changes were planned months in advance.
- At that time, machine time was a costly and limited resource, so software had to be meticulously planned and optimized. An error in definition, planning, or execution could be catastrophic for the project, requiring a complete restart of the cycle.
Some of these principles have persisted, although technological advances allowed for faster and more flexible lifecycles.
The Evolution: From Waterfall to Agile/DevOps
The 90s-2000s: The Rebellion Against Rigidity
The Waterfall model dominated development for decades. It was slow and rigid but seemingly efficient. It could not adapt to change, so change had to be controlled through exhaustive planning.
But that was not agile enough to deliver quality software that customers truly wanted when they needed it.
The Agile methodology was a response to this weakness. Its proponents realized that they could never completely eliminate risk or uncertainty from a project. Instead, they focused on containing risk through rapid development and constant validation.
Some innovations from the Agile era (1990-2010):
- Rapid Application Development (RAD) - James Martin, 1991: emphasis on prototyping and iterative delivery over excessive up-front planning.
- Scrum (1995): iterative frameworks with sprints, backlogs, and self-organizing teams.
- Extreme Programming (XP) - Kent Beck, 1996: taking best practices to the extreme (test-driven development, pair programming).
- Agile Manifesto (2001): 17 developers in Snowbird, Utah, consolidated principles that would transform the industry.
The 2010s: The Era of Cloud and DevOps
The 2010s marked another turning point. Companies began to prioritize product management, customer insights, and user experience (UX) over simply producing “features.”
DevOps emerged to close the gap between development and operations teams. Instead of writing code and “throwing it over the wall” to IT administrators, engineers became responsible for their own deployments, monitoring, and system reliability.
Companies like Netflix and Spotify became famous for their microservices architectures, allowing teams to work on independent services that scaled efficiently.
Key advances from the DevOps era:
- Continuous Integration / Continuous Deployment (CI/CD)
- Infrastructure as Code (IaC)
- Containers and orchestration (Docker, Kubernetes)
- Real-time monitoring and observability
- End-to-end ownership culture (“You build it, you run it”)
Although the mainframe world was slow to adopt DevOps, some organizations demonstrated that it was possible. For example, Broadcom’s Mainframe Software Division implemented full Agile practices, using modern tools like Jenkins, GitHub Actions, and Zowe CLI to automate CI/CD on mainframes.
Even the most traditional environment can transform with the right mindset and tools.

The New Era: AI-Native Engineering
What is AI-Native Engineering
AI-Native Engineering is not simply using AI as a tool. It is an approach that places AI at the center of the entire software development lifecycle, from conception to operation.
As Gartner highlights, this transformation requires:
- Investment in talent and upskilling of existing employees.
- Prioritization of diverse data sources to inform product decisions.
- Realignment of business planning around data-driven metrics based on outcomes.
- Investment in AI-enhanced tools and integrated platforms.
But most importantly: it requires a fundamental change in ways of working and roles. Neither Agile nor DevOps would have allowed for improvements in quality and speed of products without incorporating new roles and operating models. This fundamental change is the greatest challenge we face at this moment to transform the technology industry.
The Economic Context: $2.6-4.4 Trillion at Stake
We are now entering, if we haven’t already, a completely new phase. This time, the speed of change is unprecedented.
McKinsey estimates that: “generative AI could add between $2.6 trillion and $4.4 trillion to the global economy, fundamentally transforming how software products are conceived, developed, and brought to market. This is not an incremental change in efficiency — it represents a fundamental reimagining of the entire product development process.”
In this context, in addition to the usual suspects —Google, Amazon, Microsoft, etc.— the major players in AI —OpenAI, Anthropic, etc.— join a series of startups that are maximizing the potential of LLMs to transform the technology industry.
Among these startups are companies like Cognition, which recently acquired Windsurf (the company formerly known as Codeium) and its flagship agentic IDE of the same name. With this acquisition, Cognition positions itself as a leader, pairing what they announce as the first agentic IDE (Windsurf) with their own agent (Devin). To give an idea of the magnitude of the operation, OpenAI made an offer for the company of $3 billion, and ultimately Cognition acquired it in a deal valued at $2.4 billion.
Another notable agentic IDE is Cursor. The company was founded in 2023 by MIT alumni, is currently valued at nearly $10 billion, and is the agentic IDE of choice among software teams at tech companies such as OpenAI, Stripe, Shopify, and Spotify, among many others.
In the world of AI agents specialized in software engineering, in addition to Devin, we have others like Blitzy, which with total autonomy try to solve development needs that previously required humans.
Platforms that, while not agentic themselves, enable agility in this world also play a prominent role. These products go beyond development and let non-experts deploy infrastructure. I am thinking of products like Vercel, Railway, Netlify, Supabase, etc. They enable agents not only to develop applications but also to deploy and maintain them.
Goldman Sachs is piloting Devin (July 2025), becoming the first major bank to use an autonomous software engineer. Santander has quickly followed the same path.
But banks are not the only ones on this path; I leave you a recent video about Ferrovial’s agreement to adopt Windsurf.
Four Fundamental Concepts
To navigate this new era, we need to understand four interrelated but distinct concepts:
1. Vibecoding: Conversation as Interface
Vibecoding describes a reality in which a developer or domain expert describes in natural language what they need, and the AI implements it. A generative AI system receives that description and turns it into running code: a component, a module, or even a complete application.
The work is no longer about writing code but about clearly describing what you want, discussing possible solution alternatives, and validating the result or iterating.
Iterations are no longer weeks but hours or minutes. The temporal dimension of development is drastically reduced.
2. Agentic AI: Purposeful Autonomy
Agentic AI in software development refers to AI systems that act as specialized and autonomous experts. They can be analysts, architects, developers, testers, DevOps, and any other traditional role.
These AI agents receive a high-level goal, plan their own approach, develop, test, and refactor code, launch deployments, and learn and improve with feedback.
Gartner predicts that by 2027, this agentic technology will be embedded in engineering workflows, and that 80% of software engineers will need to reskill to adapt to roles created when generative AI takes on broader programming functions.
Autonomy is still limited but is growing exponentially. Every month we see new advances in benchmarks like SWE-bench that measure the ability of AI agents to solve real issues on GitHub.
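The goal-driven loop described above (receive a goal, plan an approach, execute, verify, retry on failure) can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not any real agent's implementation: `plan`, `execute`, and `checks_pass` are hypothetical stand-ins for calls to an LLM and a real toolchain, with a simulated failing first test run to show the feedback loop.

```python
# Minimal sketch of an agentic loop: the agent receives a goal,
# decomposes it into steps, executes each step, and retries when
# its own checks fail. All helpers are illustrative stand-ins.

def plan(goal: str) -> list[str]:
    # A real agent would ask an LLM to decompose the goal.
    return [f"implement {goal}", f"test {goal}"]

def execute(step: str, attempt: int) -> str:
    # A real agent would edit files, run builds, call tools, etc.
    return f"{step} (attempt {attempt})"

def checks_pass(result: str, attempt: int) -> bool:
    # Simulated verification: the first "test" run fails, so the
    # agent has to iterate before the step is accepted.
    return attempt >= 2 or "test" not in result

def run_agent(goal: str, max_attempts: int = 3) -> list[str]:
    log = []
    for step in plan(goal):
        for attempt in range(1, max_attempts + 1):
            result = execute(step, attempt)
            if checks_pass(result, attempt):
                log.append(result)
                break  # step accepted, move to the next one
    return log

print(run_agent("login endpoint"))
```

Real agents wrap each `execute` call around compilers, test runners, and version control, but the shape of the loop, plan, act, verify, and iterate without a human in the middle, is the same.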
3. AI-Native Engineering: Reinventing the Process
AI-Native Engineering is the complete redesign of the SDLC (software development lifecycle) with AI as the architectural foundation, not as an added tool. This difference is key because it does not involve incorporating AI into current processes but reinventing the software development lifecycle around the growing capabilities of AI agents.
The key difference:
- Vibecoding = human-AI collaboration to generate code quickly.
- Agentic AI = AI agents that act almost as “autonomous developers.”
- AI-Native Engineering = organizational, process, and tool transformation to place AI at the center.
The goal is not only to reduce costs and improve “Time to Market” but also to enhance product quality. As Inbal Shani, CPO and head of R&D at Twilio, emphasizes: “With the implementation of AI, I believe the most relevant and unique change will be improvements in product quality, given the ability to better analyze, synthesize information, and make recommendations.”
4. Context Engineering: Achieving the Necessary Scale
Scaling the previous capabilities at an enterprise level requires agentic tools but also equipping AI with sufficient knowledge of the organization and its methodology so that it acts as a team member and not as an isolated silo. This is key to generating quality software that can be used in production and is achieved through Context Engineering.
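As a minimal sketch of what such a shared context might look like, the following Python fragment assembles organizational knowledge into an agent's prompt. The context keys, their values, and the prompt layout are all assumptions for illustration, not any product's actual format.

```python
# Illustrative sketch of Context Engineering: injecting organizational
# knowledge (standards, architecture, security policy) into an agent's
# prompt so it acts like a team member rather than an isolated silo.

ORG_CONTEXT = {
    "coding_standards": "PEP 8; type hints mandatory",
    "architecture": "hexagonal; services expose REST APIs",
    "security": "no secrets in code; inputs validated at boundaries",
}

def build_prompt(task: str, context: dict[str, str]) -> str:
    # Prepend every piece of organizational context to the task so
    # the agent's output conforms to team conventions by design.
    lines = [f"[{key}] {value}" for key, value in sorted(context.items())]
    lines.append(f"Task: {task}")
    return "\n".join(lines)

prompt = build_prompt("add a customer lookup endpoint", ORG_CONTEXT)
print(prompt)
```

In practice this context would be far richer (architecture decision records, style guides, security policies) and would typically live in system prompts or repository convention files rather than a hard-coded dictionary, but the principle is the same: the agent sees the organization's rules before it sees the task.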

The Transformative Impact on the Complete SDLC
Unprecedented Acceleration
We have only just begun this transformation, and the numbers are already impressive. For the past year, in all meetings with partners, I ask the same question: what impact is the introduction of AI having on the SDLC?
Without citing the most optimistic answers, I would say:
- 30% increase in productivity for Product Managers.
- 20% acceleration in time-to-market on a 6-month project, potentially much higher if AI is incorporated not only in development but also in definition, architecture, testing, deployment, and maintenance.
- In smaller projects or MVPs of 8-10 weeks, the acceleration can be 60% or more.
These figures align with studies from analysts like McKinsey or Deloitte, and with what we are starting to observe in our own experience.
Role Convergence
In smaller projects, an emerging trend is the convergence of roles. Now a Product Manager can take on not only their traditional responsibilities but also those of development, testing, or even deployment. A UX designer can create an interface and turn it into executable code, a front-end developer can develop a back-end, and a back-end developer can launch deployments.
In these projects, the barriers between traditional roles are fading, and product teams can be more flexible and agile. In more complex projects, specialization remains valuable, and AI helps expand and extend it. Engineers evolve from individual specialists into orchestrators of teams formed by human technicians and AI agents.
Change of Mindset
Without a doubt, mindset is the key to adopting this transformation and the main barrier when it is not adopted. Many specialist roles such as product managers, architects, software engineers, testers, or DevOps engineers feel threatened by AI, but paradoxically, adopting it makes them better in their specialty and extends their capabilities into other specialties.
This expansion of capabilities requires that the engineer has a global vision of the project, the organization, and its methodology. Those specialists without a global vision are at the greatest risk of displacement by AI.
In AI-Native Engineering teams, all team members must have a global vision of the project and its context, from the standards and best practices to apply in their role to those of their colleagues.
To reduce that cognitive load, it is important that teams work not only with AI tools but also with processes designed for this that incorporate common standards shared by humans and AI. Again, we talk about Context Engineering.
Code Security and Quality
Productivity should never have been measured in lines of code, and now even less so. The volume of code is growing exponentially, making traditional security and quality tools insufficient.
Fortunately, AI can also be employed in these tasks. Quality and Security must be part of this change in processes; otherwise, they will not be able to keep pace.
At this point, Context Engineering also comes into play: being able to inject quality and security requirements into AI agents allows for the generation of quality and secure products from the design stage.
Managing Organizational Expectations
There is still immaturity in the market, and while experiences are very positive, it is not possible to guarantee consistent results across all technologies, organizations, and types of projects.
Again, having contexts created for AI agents is key to achieving consistent results. Without these contexts, the outcome depends much more on the experience of the technician or domain expert coordinating these agents.
Why Are We Facing a Different Transformation?
Having lived through the transition from mainframe to client-server, from waterfall to Agile, from monoliths to microservices, from on-premise to cloud, I can confidently say that this transformation is qualitatively different.
1. Unprecedented Adoption Speed
Agile took over a decade to be widely adopted since its Manifesto in 2001.
DevOps took years to mature from its beginnings in the mid-2000s.
Platform Engineering has still not been adopted in many organizations.
The use of AI in the SDLC has recently emerged and grown exponentially. According to studies, 92% of developers based in the U.S. are already using AI coding tools. GitHub Copilot has over a million active developers and more than 20,000 adopting organizations.
2. Radical Democratization
Previously, each phase of evolution made development more accessible, but always required substantial technical knowledge.
Now, for the first time, domain experts without programming training can create functional software. This is not a hypothesis but a real observation.
Domain experts who previously worked with low-code platforms are making the leap, without additional training, into an environment free of the restrictions of the past, jumping from dragging shapes to writing text. This does not mean that every employee in a company should do so (just as not everyone was capable of using low-code), nor that they can undertake any type of project, but it is a fact that their capabilities are multiplying.
3. Emerging Autonomy
Until now, tools always required continuous human direction.
Autonomous agents like Devin or Blitzy can take an issue from Jira or GitHub, plan an approach, implement a solution, test it, and create a pull request. All without continuous human intervention.
4. Fundamental Economic Change
The cost of creating software is falling dramatically. This has implications that we are beginning to observe:
- What will happen to the SaaS market when “build” is cheaper than “buy”?
- How does outsourcing change when you can have “teams” of AI agents at a fraction of the cost?
- What does it mean for product strategies when prototyping is almost free?
5. Convergence of SDLC Phases
The traditional phases —requirements, architecture, design, development, testing, deployment, operation— are increasingly intertwined and can occur almost simultaneously. Moreover, in mature environments, they can be operated by agents coordinated and supervised by humans. What role do we assign to these people?
6. Continuous Learning
If I have always believed in lifelong learning, now it is more necessary than ever. We can dodge reality for a while, but that is not a sustainable strategy.
At this moment, all organizations are trying to learn how to adopt these new ways of working, which opens the opportunity for honest conversations in which to share experiences and learnings, successes, and failures that we are all experiencing.
Conclusion: The Road Ahead
If we mentally return to the world in which I started, we have come a long way through many changes, but some foundations have remained constant.
But the most exciting thing is that we are not only automating what we did, but fundamentally changing what we do and how we do it.
This revolution affects:
- Speed: From months to days, from days to hours.
- Accessibility: From specialists to domain experts.
- Autonomy: From directed tools to collaborative agents.
- Economics: From capital-intensive to marginal cost close to zero.
- Fundamental change in processes: With the convergence of SDLC phases.
For the transformation leader, the digital product manager, the engineering manager, this represents both an opportunity and a challenge:
The opportunity:
- Design not only functionalities but collaborative human-AI flows.
- Make the promise of digital products a reality with unprecedented speed, quality, and adaptability.
- Transform the way we relate to users, giving them greater autonomy to focus on what matters.
- Position themselves as pioneers of the AI-Native Engineering transformation.
The challenge:
- Manage the organizational transition without losing operational stability.
- Develop new competencies in existing teams.
- Ensure governance and quality in a more autonomous world.
- Lead the necessary cultural change while delivering business results.
It is not just about how much code we write, but about what value we deliver and how we adapt to constant change to remain relevant in the market. The future of Software Engineering is AI-Native Engineering. And that future is already here.
Topics to Explore Further
The topic of this post is extensive and exciting, and it leaves me feeling I have only scratched the surface. Some points I would like to delve into are:
- Context Engineering: the new critical competency to inject into AI the knowledge necessary to apply it effectively in the SDLC.
- The impact on the Low-code and SaaS market: what impact will it have on these models when the “total cost of ownership” (TCO) is drastically reduced with AI-native engineering?
- The new SDLC: what are the new processes that will allow us to radically improve quality and speed of delivery, achieving the promised hyper-agility?
- Competencies: how competencies evolve in the world of AI-Native Engineering.
- Companies: those leading the change and those enabling it.
References
- 1. McKinsey & Company (February 2025). “How an AI-enabled software product development life cycle will fuel innovation.”
- 2. Bain & Company (2024). “State of the Art of Agentic AI Transformation.”
- 3. Deloitte Insights (2025). “Autonomous generative AI agents: Under development.”
- 4. Broadcom (2024). “Mainframe and Digital Transformation: Are You Driving Better Business Outcomes?”
- 5. unite.ai (June 2025). “Cursor AI reaches a valuation of $9.9 billion with a massive $900 million raise.”
- 6. parentesis.media (July 2025). “Cognition acquires Windsurf after a weekend of hires, failed offers, and million-dollar figures.”
2 + 2 = 5
I leave you with this small final reflection: 2 + 2 = 5. A critique from Radiohead that each of us can interpret differently, and I hope to hear it live very soon.
All opinions expressed on this blog are personal and do not represent those of any company or organization with which I collaborate.