Story Point Estimation Using Transformer Based Agents

Publisher

Berlin Universities

Access Rights

info:eu-repo/semantics/openAccess

Abstract

Reliable story-pointing is key to sprint planning, staffing, and capacity, but traditional methods rely on tacit knowledge that fades. Automating and standardizing with Large Language Models reduces bias and improves predictability. Models like GPT-4 decompose tasks, solve subproblems, and propose candidates; however, closed architectures limit real-time data and expert input, sometimes causing hallucinations. Fine-tuning helps but requires rich domain data and custom weights, risking weaker generalization. We tackle these issues by enhancing agile story point estimation with extended positional encoding and a multi-agent weighting scheme in the model head. In our transformer model, we add a shared "knowledge pool" of specialized agents in the environment, each trained on distinct facets of project data. Using 12,017 story point records from 8 open-source projects, we adopt an 80/20 train/holdout split; from the holdout, 80% populates the knowledge pool and 20% is reserved for evaluation. Our system uses hyperparameters (epochs = 3, batch size = 16, learning rate = 2 × 10⁻⁵). The proposed architecture achieves 70.81% accuracy versus 42.62% for standard BERT, a relative gain of ≈ 66.1%, indicating that domain-specialized agents with refined encoding substantially improve Large Language Model-based estimations and support AI-assisted agile management.
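The nested data partition described above (80/20 train/holdout, then the holdout split 80/20 into knowledge pool and evaluation set) can be sketched as follows; function and variable names are illustrative assumptions, not taken from the paper:

```python
import random

def split_records(records, seed=42):
    """Sketch of the abstract's split scheme: 80/20 train/holdout,
    then the holdout is split 80/20 into knowledge pool / eval set."""
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    cut = int(0.8 * len(shuffled))
    train, holdout = shuffled[:cut], shuffled[cut:]
    pool_cut = int(0.8 * len(holdout))
    knowledge_pool, eval_set = holdout[:pool_cut], holdout[pool_cut:]
    return train, knowledge_pool, eval_set

# Hyperparameters reported in the abstract
HPARAMS = {"epochs": 3, "batch_size": 16, "learning_rate": 2e-5}

# With the paper's 12,017 records this yields roughly
# 9,613 training, 1,923 knowledge-pool, and 481 evaluation records.
train, pool, eval_set = split_records(list(range(12017)))
print(len(train), len(pool), len(eval_set))
```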

Keywords

Transformer Architecture, Story Point Estimation, Multi-Agent System

Source

Electronic Communications of the EASST

Volume

85

Citation

BÜYÜK, Oğuzhan Oktay & Ali NİZAM. "Story Point Estimation Using Transformer Based Agents". Electronic Communications of the EASST, 85 (2025): 1-13.
