At System Verification, we take pride in fostering collaborations with industry leaders, and our recent visit from our customer, Alfa Laval, was no exception. Hans Troiza, CTO IT of Alfa Laval, along with two members from their AI Centre of Excellence (AI CoE), visited our Malmö office to share their journey into AI and machine learning (ML). Their presentation shed light on the evolution of AI at Alfa Laval, the challenges they’ve faced, and how they are shaping the future of responsible AI.
Alfa Laval has been working with AI and ML since 1994. Over the years, they have refined their approach, but only recently did their risk management team flag AI as an area needing structured governance. This led to the formation of the AI Centre of Excellence in early 2024, tasked with developing standards, processes, and best practices for AI adoption across the organization. The AI CoE is a time-limited project group focused on ensuring AI is used responsibly and strategically, setting the frameworks that will guide Alfa Laval's future AI endeavors.
A key takeaway from Alfa Laval's AI strategy is the foundation of Responsible AI, which underpins all their initiatives. The goal is to protect both their employees and customers, while also addressing long-term challenges that AI can solve. Hans emphasized that while businesses often focus on immediate challenges, it's vital to look ahead to the future impact AI can have.
Trust is another cornerstone of their approach. Building a strong, trusting relationship between the AI CoE and the broader business is essential to driving AI adoption within the organization.
Alfa Laval’s AI initiatives are divided into two primary focus areas:
Throughout the presentation, Hans and his team reiterated the importance of adhering to ethical guidelines. Alfa Laval’s AI efforts are guided by principles that are people-centric, responsible, fair, and transparent. Protecting integrity and complying with regulations, such as the proposed EU AI Act, are top priorities for the company. Their AI CoE plays a key role in aligning AI projects with governance standards, ensuring that the technology is used safely and ethically.
Hans and his team shared several AI use cases that are currently being implemented at Alfa Laval:
These use cases exemplify the diverse ways in which AI is being used to improve both operational efficiency and product innovation.
One of the more practical and easily accessible examples of AI at work in Alfa Laval's daily operations is the use of Microsoft Copilot and similar tools. Selected teams are already using these tools to increase productivity, and Alfa Laval is taking a thoughtful approach to scaling their use. Hans highlighted the importance of regular feedback sessions with Copilot users and emphasized that further training and education will be critical to expanding the tool's adoption.
Alfa Laval's AI journey, shared during their visit to our Malmö office, was truly inspiring. Their commitment to Responsible AI, together with their methodical balancing of everyday efficiency and groundbreaking innovation, exemplifies the care needed to successfully integrate AI into an organization.
At System Verification, we’re excited to be a part of this journey, supporting Alfa Laval in achieving their goals and learning from their experiences. Our collaboration is a testament to the power of partnerships in advancing AI and QA practices, and we look forward to seeing how they continue to lead the way in responsible and impactful AI development.