AG is a data powerhouse. Our innovative data platform and strong data culture are revolutionizing our approach to business challenges, as exemplified by our handling of the 2021 floods. Discover how strategic alignment with business objectives, enhanced data literacy, and advanced technology like Python, R, and Power BI on the Azure platform helped us embrace data and are propelling us towards a future rich in GenAI and machine learning possibilities.
TOM DONAS
Head of Business Solutions
Data Intelligence
Data-centric by nature
In the wake of the 2021 floods, we created a model to estimate the number of impacted homes and the expected damages. Our forecast of 1.5 to 2 billion euros in claims proved accurate: total damage claims for traditional houses amounted to 1.9 billion euros across the entire insurance industry. This project was monumental for several reasons (a toy sketch of the estimation logic follows the list below):
- It enabled us to proactively reach out to our clients with a detailed assessment of their damages, smoothing the claims process and speeding up payouts during a stressful time. Thanks to our precise data, we were able to act quickly.
- We shared our model's insights and our experience with the regional government and authorities, helping them properly assess the impacted areas and affected citizens.
- The astronomical cost of the floods surpassed the threshold above which the government is required to contribute to the payout of damage claims. Our detailed model facilitated the talks with the government.
- Finally, it served as a wake-up call for our industry: we were the first insurer capable of providing this information, which showed that data is critical in times of crisis.
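The estimation logic behind such a model can be illustrated with a toy example. The sketch below is purely hypothetical (the flood polygon, portfolio, and damage ratio are invented for illustration and are not AG's actual data or model): it overlays insured-property locations with a flood extent and aggregates expected damages.

```python
from shapely.geometry import Point, Polygon

# Hypothetical flood extent; in practice, polygons would be derived from
# water-level and terrain data for the affected river basins.
flood_zone = Polygon([(4.0, 50.4), (4.6, 50.4), (4.6, 50.7), (4.0, 50.7)])

# Hypothetical insured-property portfolio: (longitude, latitude, insured value).
portfolio = [
    (4.2, 50.5, 350_000),
    (4.5, 50.6, 280_000),
    (5.1, 50.9, 400_000),  # outside the flood zone
]

# Illustrative assumption: a flooded home suffers damage equal to a fixed
# fraction of its insured value.
DAMAGE_RATIO = 0.25

impacted = [value for lon, lat, value in portfolio
            if flood_zone.contains(Point(lon, lat))]
expected_damage = sum(value * DAMAGE_RATIO for value in impacted)

print(f"Impacted homes: {len(impacted)}")
print(f"Expected damage: EUR {expected_damage:,.0f}")
```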
This feat was made possible by our long-standing commitment to data-centric practices and advanced analytics skills. It stems from measures that have consistently placed data at the heart of our entire value chain.
Data has to be aligned with business strategies
To propel data to center stage, it must positively impact business outcomes. So, we ensure our data strategy is aligned with the overarching business strategy. We don't do data for the sake of it; our goal is clear: we want to help provide user-friendly applications, systems, and products. Our colleagues make this easy. They've embraced data as a key part of their projects and involve us from the genesis of a project. They've bought into the idea that our findings help them create relevant, user-friendly applications and improve business processes.
To that end, we define use cases that serve the goals and needs of our business counterparts. Defining business-centric data projects maximizes the ROI of our efforts and helps our colleagues gauge the value of our data work.
Increasing data literacy for data adoption
Understanding the value of data and of our data projects is as crucial as strategic alignment. Our non-data colleagues will only appreciate our efforts if they understand them. So, we've developed upskilling sessions together with my colleague Patrick Sergysels' team. During these sessions, participants learn crucial data ownership and governance principles. They also acquire the tools to untangle data complexity and become aware of the importance and consequences of their role as data stewards.
The second crucial pillar of data adoption is reporting. We make a concerted effort to generate timely, easily digestible, and reliable data reports. Data can only become a centerpiece of our workflows if our business colleagues conceptually grasp its potential and understand project-specific reports.
Data quality is key
Technology: the great enabler
None of these efforts would be possible, though, without the excellent technical foundations laid by our data scientists and engineers. Last year, we built a new cloud platform and started migrating our data systems away from an on-premises SAS data platform. We wanted to store and process high volumes of structured and unstructured data efficiently, increase our scalability, and provide the services our colleagues and clients want.
Our new platform was built on Microsoft Azure Synapse Analytics, focusing on automation, standardization, and cost efficiency. Our architecture combines Azure Databricks and Azure Data Lake Storage Gen2 for handling large data volumes. We use Azure SQL DB for data automation, Synapse Dedicated SQL pools for data distribution, and Azure Data Factory for mapping data flows and ETL processes. Additionally, our colleagues built a self-service area for end users using Microsoft Power BI and Azure tools, and created a custom PySpark framework for efficient data management.
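The custom PySpark framework itself is internal, but a minimal sketch of the kind of building block such a framework typically wraps might look like this (the storage account, container names, and paths below are hypothetical):

```python
from pyspark.sql import SparkSession, functions as F

# On Azure Databricks, a SparkSession with Delta Lake support is preconfigured.
spark = SparkSession.builder.getOrCreate()

# Hypothetical ADLS Gen2 locations; real account and container names differ.
RAW_PATH = "abfss://raw@examplelake.dfs.core.windows.net/claims/"
CURATED_PATH = "abfss://curated@examplelake.dfs.core.windows.net/claims/"

def ingest(source: str, target: str) -> None:
    """Read raw CSV files, stamp standard metadata columns, write a Delta table."""
    df = (
        spark.read.option("header", True).csv(source)
        .withColumn("_ingested_at", F.current_timestamp())
        .withColumn("_source_file", F.input_file_name())
    )
    df.write.format("delta").mode("overwrite").save(target)

ingest(RAW_PATH, CURATED_PATH)
```

Standardizing ingestion behind one reusable function like this is what keeps a platform automated and cost-efficient: every new data source gets the same metadata, format, and storage layout for free.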
Looking ahead: leveraging GenAI and ML
Our new cloud-based platform and infrastructure allow us to leverage emerging tools and technologies like generative AI and ML. These technologies help us tap into previously underused unstructured data and turn futuristic pipe dreams into feasible applications. GenAI and ML will turn everything around us into usable data.
Let's take the car claims process, for instance. We're looking to create a model that uses unstructured data to assess the type and cost of damage based on pictures sent by the client. The model would interpret a picture and compare it to our vast volume of historical claims data to assess the extent of the damage. With these insights, we unlock a wide variety of process improvements for our customers and partners, like making an appointment at a body shop or with an insurance expert.
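As a rough illustration of the first step, here is a hypothetical sketch that uses an off-the-shelf pretrained vision model to turn a claim photo into an embedding that can be compared against historical claims with known repair costs. The backbone choice, file names, and comparison step are illustrative assumptions, not AG's actual model:

```python
import torch
from PIL import Image
from torchvision import models

# Off-the-shelf backbone as a stand-in for a purpose-built damage model.
weights = models.ResNet18_Weights.DEFAULT
backbone = models.resnet18(weights=weights)
backbone.fc = torch.nn.Identity()  # keep the 512-dim embedding, drop the classifier
backbone.eval()

preprocess = weights.transforms()  # preprocessing matching the pretrained weights

def embed(image_path: str) -> torch.Tensor:
    """Turn a claim photo into an embedding for comparison with past claims."""
    img = Image.open(image_path).convert("RGB")
    with torch.no_grad():
        return backbone(preprocess(img).unsqueeze(0)).squeeze(0)

# Illustrative usage: compare a new photo to embedded photos of past claims
# with known repair costs, and estimate cost from the closest matches.
# new_photo = embed("dented_bumper.jpg")
# similarity = torch.nn.functional.cosine_similarity(new_photo, past_photo, dim=0)
```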
Aside from automating claims processes, I expect emerging technologies to change how we work with data. Manually creating data reports will become a thing of the past; reporting will be fully automated. We'll be able to interact with a chatbot, request a detailed data report, and get it instantly. This will democratize data and information, giving our business units in-depth, clear insights in the blink of an eye. With data omnipresent and processed at speed, data-centric decision making will become the norm.