Challenges & Goals
Clearwater Analytics, a financial technology provider, relied on analysts to manually compile and summarize complex investment reports for its clients. This process was labor-intensive and time-consuming, which limited the number of reports that could be produced and reviewed in a given period. The organization aimed to explore generative AI as a solution to automate report drafting, greatly reducing the manual workload while maintaining (or even improving) the accuracy of the summaries. The Proof of Concept (POC) needed to demonstrate not only that an AI system could summarize financial reports effectively, but also that it could be integrated into Clearwater’s existing workflows with proper monitoring and controls. Equally important, the POC had to convincingly showcase the value of this technology to gain leadership buy-in for broader generative AI adoption.
Solution
In this engagement, Ahmed Ali spearheaded the development of an end-to-end generative AI pipeline using Amazon SageMaker JumpStart. He selected and fine-tuned a suitable large language model (LLM) from JumpStart’s model hub that could interpret Clearwater’s investment data and generate coherent, professional report summaries. Using Amazon Bedrock Agents, he securely integrated the LLM into Clearwater’s environment, ensuring the model had access to relevant financial data (with appropriate permission controls) and adhered to the company’s security and compliance requirements while generating content.
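The core summarization step amounts to sending report text to a Bedrock-hosted model and collecting the draft. A minimal sketch of such a call is shown below; the model ID, prompt wording, and field names are illustrative assumptions, not Clearwater’s actual configuration:

```python
def build_summary_prompt(report_text: str) -> str:
    """Wrap raw report text in a summarization instruction (illustrative prompt)."""
    return (
        "Summarize the following investment report for an executive audience. "
        "Keep all figures exact and flag any missing data.\n\n" + report_text
    )

def summarize_report(report_text: str) -> str:
    """Send the report to a Bedrock-hosted LLM via the Converse API (sketch)."""
    import boto3  # imported lazily so the prompt helper is testable offline

    client = boto3.client("bedrock-runtime")
    response = client.converse(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # assumed model choice
        messages=[{"role": "user",
                   "content": [{"text": build_summary_prompt(report_text)}]}],
        inferenceConfig={"maxTokens": 1024, "temperature": 0.2},
    )
    return response["output"]["message"]["content"][0]["text"]
```

A low temperature is used in the sketch because financial summaries reward consistency over creative variation; in practice the prompt and inference settings would be tuned against the fine-tuned model actually deployed.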
To optimize the model’s performance, Ahmed orchestrated a robust training and data pipeline using SageMaker Pipelines and Data Wrangler. He prepared historical investment reports and their executive summaries as a training dataset, performing data cleaning and text preprocessing in a reproducible manner. Through automated hyperparameter tuning experiments, the model’s summarization accuracy improved by roughly 40% over initial baselines. The entire workflow, from data preparation and model fine-tuning to inference generation, was automated with AWS Step Functions and triggered via AWS Lambda, eliminating about 75% of the manual steps that analysts previously had to perform. This automation ensured that once new financial data was available, the system could generate a draft report with minimal human intervention.
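The Lambda trigger described above can be sketched as a handler that starts the Step Functions state machine whenever a new data file lands in S3. The environment variable, state machine name, and input schema here are assumptions for illustration:

```python
import json
import os

def build_execution_input(bucket: str, key: str) -> str:
    """Build the Step Functions execution input for a newly arrived data file
    (field names are illustrative, not the actual pipeline schema)."""
    return json.dumps({
        "source_bucket": bucket,
        "source_key": key,
        "task": "draft_report",
    })

def lambda_handler(event, context):
    """Kick off the report-drafting state machine on an S3 put event (sketch)."""
    import boto3  # imported lazily so the input builder is testable offline

    record = event["Records"][0]["s3"]
    sfn = boto3.client("stepfunctions")
    sfn.start_execution(
        stateMachineArn=os.environ["STATE_MACHINE_ARN"],  # assumed env var
        input=build_execution_input(record["bucket"]["name"],
                                    record["object"]["key"]),
    )
```

Keeping the execution-input construction in its own function makes the contract between Lambda and the state machine explicit and easy to unit test without AWS credentials.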
Ahmed also put safeguards in place to maintain the model’s reliability over time. He integrated Amazon CloudWatch metrics and implemented a custom drift detection routine to continuously evaluate the model’s outputs against live data patterns; if summarization quality deviated or the input data distribution shifted beyond set thresholds, the system automatically alerted the team via CloudWatch Alarms and AWS Lambda notifications. This proactive monitoring meant the POC ran in a controlled environment with confidence in the consistency of results. Throughout the project, Ahmed documented the architecture and findings in a detailed technical whitepaper and led live demonstrations for Clearwater’s executives. By clearly communicating the solution’s ROI, highlighting the dramatic reduction in manual effort and the gains in accuracy, he secured the executive sponsorship needed to scale up generative AI initiatives at Clearwater.
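One common way to implement such a drift check is to bin the input data, compare live and baseline distributions with a population stability index (PSI), and publish the score as a custom CloudWatch metric that an alarm watches. The sketch below assumes this approach; the metric namespace, metric name, and 0.2 threshold (a common PSI rule of thumb) are illustrative, not the actual implementation:

```python
import math

DRIFT_THRESHOLD = 0.2  # assumed alert threshold; a common PSI rule of thumb

def population_stability_index(expected, actual) -> float:
    """PSI between two binned probability distributions; higher means more drift.
    Both arguments are equal-length lists of bin probabilities summing to 1."""
    eps = 1e-6  # guard against log(0) for empty bins
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

def publish_drift_metric(psi: float) -> None:
    """Push the drift score to CloudWatch so an alarm can notify the team (sketch)."""
    import boto3  # imported lazily so the drift math is testable offline

    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_data(
        Namespace="GenAIReportPOC",  # illustrative namespace
        MetricData=[{"MetricName": "InputDriftPSI", "Value": psi}],
    )
```

With the score published as a metric, the alerting itself needs no custom code: a CloudWatch Alarm on `InputDriftPSI` crossing the threshold can invoke the notification Lambda directly.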
Key Results
- Automated 75% of the report preparation workflow through AI-driven drafting and orchestration, dramatically reducing manual analyst effort.
- Improved summarization accuracy by ~40% via model fine-tuning, resulting in higher-quality and more consistent investment report summaries.
- Secured leadership buy-in for broader generative AI deployment, as the successful POC demonstrated clear productivity gains and compliance-aware design.