Story Point Estimation Using Transformer-Based Agents
Abstract
Reliable story point estimation is key to sprint planning, staffing, and capacity forecasting, but traditional methods rely on tacit expert knowledge that fades. Automating and standardizing estimation with Large Language Models reduces bias and improves predictability. Models such as GPT-4 can decompose tasks, solve subproblems, and propose candidate estimates; however, their closed architectures limit access to real-time data and expert input, sometimes causing hallucinations. Fine-tuning helps but requires rich domain data and custom weights, risking weaker generalization. We address these issues by enhancing agile story point estimation with extended positional encoding and a multi-agent weighting scheme in the model head. In our transformer model, we add a shared "knowledge pool" of specialized agents in the environment, each trained on a distinct facet of project data. Using 12,017 story point records from 8 open-source projects, we adopt an 80/20 train/holdout split; from the holdout, 80% populates the knowledge pool and 20% is reserved for evaluation. Our system is trained with hyperparameters epochs = 3, batch size = 16, and learning rate = 2×10⁻⁵. The proposed architecture achieves 70.81% accuracy versus 42.62% for standard BERT, a relative gain of ≈66.1%, indicating that domain-specialized agents with refined positional encoding substantially improve Large Language Model-based estimation and support AI-assisted agile management.
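
To make the evaluation protocol concrete, the following is a minimal sketch of the nested split and the quoted fine-tuning hyperparameters; the placeholder record indices, random seed, and output directory are assumptions for illustration, not the authors' code.

```python
# Minimal sketch of the nested 80/20 split described in the abstract; the
# indices below are placeholders standing in for the 12,017 story point records.
from sklearn.model_selection import train_test_split
from transformers import TrainingArguments

records = list(range(12017))  # placeholder for the actual story point records

# 80/20 train/holdout split over all records
train, holdout = train_test_split(records, test_size=0.20, random_state=42)

# From the holdout, 80% populates the agents' knowledge pool, 20% is evaluation
knowledge_pool, evaluation = train_test_split(holdout, test_size=0.20,
                                              random_state=42)
print(len(train), len(knowledge_pool), len(evaluation))  # 9613 1923 481

# Fine-tuning hyperparameters quoted in the abstract
args = TrainingArguments(
    output_dir="ckpt",               # assumption: any checkpoint directory
    num_train_epochs=3,              # epochs = 3
    per_device_train_batch_size=16,  # batch size = 16
    learning_rate=2e-5,              # learning rate = 2×10⁻⁵
)
```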
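As a rough illustration of the multi-agent weighting idea, the sketch below mixes per-agent classification logits with learned softmax-normalized weights in the model head. The abstract does not specify the exact head design, so the agent count, class count, and the use of simple linear layers as stand-ins for full specialized agents are all assumptions.

```python
import torch
import torch.nn as nn

class AgentWeightedHead(nn.Module):
    """Hypothetical head: combines logits from specialized agents
    using learned mixing weights (a sketch, not the paper's design)."""
    def __init__(self, hidden_size: int, num_agents: int, num_classes: int):
        super().__init__()
        # One linear "agent" per project facet; real agents would be
        # full models trained on distinct slices of the knowledge pool
        self.agents = nn.ModuleList(
            [nn.Linear(hidden_size, num_classes) for _ in range(num_agents)]
        )
        # Learnable mixing weights, uniform at initialization
        self.agent_weights = nn.Parameter(torch.zeros(num_agents))

    def forward(self, pooled: torch.Tensor) -> torch.Tensor:
        # pooled: (batch, hidden_size), e.g. BERT's [CLS] representation
        logits = torch.stack([agent(pooled) for agent in self.agents])  # (A,B,C)
        weights = torch.softmax(self.agent_weights, dim=0)              # (A,)
        return torch.einsum("a,abc->bc", weights, logits)               # (B,C)

# Usage: mix 4 agents over BERT-base hidden size into 10 story-point classes
head = AgentWeightedHead(hidden_size=768, num_agents=4, num_classes=10)
out = head(torch.randn(2, 768))  # shape: (2, 10)
```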