Abstract:
This comprehensive study examines the influence of AI tools on quality and
efficiency within the Software Development Life Cycle (SDLC). A controlled experiment
comparing an AI-assisted team with a non-AI team revealed substantial improvements
in task completion speed, defect reduction, and cost savings, affirming that AI tools
can significantly enhance project outcomes when strategically integrated.
Objective:
The primary goal of this research was to assess the effectiveness of AI tools in
optimizing both efficiency and quality across SDLC processes. The study examined both
qualitative and quantitative improvements, evaluating market-leading tools from
requirement gathering through feature delivery.
Methodology:
A controlled experiment was conducted involving two development teams
tasked with building identical applications under the same initial requirements and
support structure. The AI-enabled team used carefully selected tools, while the non-AI
team worked without them. Performance and quality metrics, including task completion
speed, defect density, and cost savings, were meticulously tracked to ensure an accurate
comparative analysis.
Tools Evaluated:
1. Read AI: Applied for requirement gathering and refinement meetings. It facilitated
transcription and generated actionable insights, enhancing clarity and traceability
of requirements.
2. ChatGPT Plus: Utilized for drafting user stories and test cases following Behavior Driven Development (BDD) principles. It improved acceptance criteria and expanded test coverage by suggesting edge cases, resulting in more robust test strategies.
3. Uizard: Employed for UI/UX design automation, enabling the rapid generation of
wireframes and prototypes from user stories and examples, significantly reducing
rework and enhancing stakeholder feedback.
4. GitHub Copilot: Integrated into Visual Studio and Visual Studio Code for code
generation, error debugging, and pipeline configuration. It also supported
infrastructure as code (IaC) development and reduced repetitive coding tasks, accelerating development timelines.
Scope:
The project involved creating a Delivery Oversight Dashboard for executive
stakeholders, offering a real-time overview of project health using KPI indicators such as
the Delivery Management Index (DMI) and Billing Hours Expansion Rate (BHER). The MVP
included real-time KPI displays, structured data input, and a role-based access control
model.
Results:
Performance Metrics:
• Product Owner Tasks: 76% faster with Read AI and ChatGPT, improving
requirement clarity and backlog management.
• QA Tasks: 41% faster with ChatGPT, enhancing test quality and reducing bug rates.
• UX Design: 45% faster using Read AI, ChatGPT, and Uizard, leading to quicker
design iterations and more effective stakeholder feedback.
• Backend Development: 11% faster using GitHub Copilot, aided by enhanced user
stories and feature descriptions.
• Frontend Development: 13% faster using GitHub Copilot, benefiting from clearer
user stories and requirements.
Delivery Metrics:
• Cycle Time: 12% reduction due to streamlined development processes.
• Lead Time: 38% faster, facilitated by improved collaboration and quality
enhancements at each stage of the SDLC.
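The cycle-time and lead-time figures above can be derived from work-item timestamps. A minimal sketch with hypothetical dates (the study does not specify its tracking tooling or raw data):

```python
from datetime import datetime

# Hypothetical work-item timestamps -- not taken from the study's data.
requested = datetime(2024, 3, 1, 9, 0)   # stakeholder request logged
started = datetime(2024, 3, 4, 10, 0)    # development work begins
delivered = datetime(2024, 3, 8, 16, 0)  # feature delivered

lead_time = delivered - requested   # request -> delivery
cycle_time = delivered - started    # work start -> delivery

print(f"Lead time:  {lead_time}")
print(f"Cycle time: {cycle_time}")
```

Averaging these deltas across all completed work items for each team yields the comparative percentages reported above.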
Quality Metrics:
• Defect Density: 62% reduction, reflecting improved testing practices and clearer
requirements.
• Requirements Adherence: 36% improvement due to better traceability and
requirement clarity.
• DevOps Maturity: 17% lower for the AI-assisted team, indicating a need for further
process refinement when relying heavily on AI tools.
• Code Quality: 21% lower, highlighting the importance of complementary manual
reviews in complex coding tasks.
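Defect density is conventionally measured as defects per thousand lines of code (KLOC). A minimal sketch with hypothetical counts chosen to illustrate the reported 62% reduction (the study does not publish its raw defect data):

```python
def defect_density(defects: int, loc: int) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects / (loc / 1000)

# Hypothetical counts -- assumed, not from the study.
baseline = defect_density(45, 30_000)  # non-AI team: 1.5 defects/KLOC
ai_team = defect_density(17, 30_000)   # AI-assisted team: ~0.57 defects/KLOC

reduction = 1 - ai_team / baseline
print(f"Reduction: {reduction:.0%}")
```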
Cost Savings:
• 38% reduction in overall costs using AI tools with a part-time team setup.
• Estimated Monthly Savings: $10,374, based on a $55/hr rate net of AI tool
subscription costs.
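The savings figure follows from hours saved at the billing rate minus tool subscription fees. A minimal sketch of that arithmetic; only the $55/hr rate is given in the study, so the hours and tool costs below are assumptions chosen to match the reported total:

```python
RATE = 55.0  # $/hr billing rate, as reported in the study

# Hypothetical inputs -- the study does not break these down.
hours_saved_per_month = 200   # assumed, chosen to match the reported total
tool_cost_per_month = 626.0   # combined AI tool subscriptions (assumed)

savings = hours_saved_per_month * RATE - tool_cost_per_month
print(f"Estimated monthly savings: ${savings:,.2f}")
```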
Conclusion:
The integration of AI tools into the SDLC significantly enhanced efficiency,
reduced defect density, and resulted in substantial cost savings. However, the
reductions in DevOps maturity (17%) and code quality (21%) highlight the importance of
balancing AI automation with manual reviews, especially for complex design and
architectural tasks.
The tools were most effective in requirement translation, test case generation, and
accelerating UI/UX design.
Future Work:
• Extend the study to larger-scale client implementations.
• Provide specialized training on advanced prompt engineering techniques for
optimized tool utilization.
• Explore additional AI tools for broader SDLC support and enhanced code review
capabilities.
Glossary:
• SDLC: Software Development Life Cycle
• PO: Product Owner
• QA: Quality Assurance
• BDD: Behavior-Driven Development
• DMI: Delivery Management Index
• BHER: Billing Hours Expansion Rate
References:
• GitHub Copilot Research: GitHub Blog
• Internal experiments led by Matias, Softensity
Acknowledgments:
We extend our gratitude to Leo for personnel training and data collection, Gerson for research management, and Monika for providing critical guidance
throughout the project.