Part of the Orange Group

Raiffeisen Bank

Raiffeisen Bank Hrvatska is transforming its data architecture towards a Data-Driven approach


Main benefits

• Reduced impact of changes in source systems on data consumers
• Auto-generated ETL code limits the human-error factor in data flows
• Aggregating several data sources creates a single source of truth for data consumers
• Internal Agile teams build their solutions on top of the prepared data architecture
“Introducing Data Transformation Architecture enables Bank to shift towards becoming a data-driven organization sooner than later. BlueSoft supports us in that journey with technology and advisory expertise in a joint team effort from day one.”
Ivan Knezović, Executive Director of IT

Challenge

A heavily siloed analytical ecosystem caused issues with data integrity, consistency, and trust. As a company committed to modern architectures and best practices, the bank wanted to limit the number of errors, reduce operational risk, optimize OPEX, and improve data quality. The new data architecture was expected to solve the existing issues and meet the demands of processing both structured and unstructured data. Such a transformation is a challenge from the organizational perspective, so the work was carried out in line with agile methodology.

Solution

To address these challenges, our team proposed a Data Lake-based Lambda Architecture as the TO-BE state. The solution’s key architectural assumption is the separation of the replication and serving layers.
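As an illustration only (not the bank’s actual code), the layer separation in a Lambda Architecture can be sketched as follows: a batch layer precomputes views over the full master dataset, a speed layer holds increments that arrived since the last batch run, and a serving layer merges both at query time. All names and the transaction-count example are hypothetical.

```python
# Illustrative sketch of Lambda Architecture layer separation.
# Hypothetical example: counting transactions per account.
from collections import Counter


class BatchLayer:
    """Recomputes views from the immutable master dataset."""
    def __init__(self):
        self.view = Counter()

    def recompute(self, master_dataset):
        self.view = Counter(event["account"] for event in master_dataset)


class SpeedLayer:
    """Holds increments that arrived since the last batch run."""
    def __init__(self):
        self.delta = Counter()

    def ingest(self, event):
        self.delta[event["account"]] += 1


class ServingLayer:
    """Answers queries by merging the batch and real-time views."""
    def __init__(self, batch, speed):
        self.batch, self.speed = batch, speed

    def transactions_for(self, account):
        return self.batch.view[account] + self.speed.delta[account]


master = [{"account": "A"}, {"account": "A"}, {"account": "B"}]
batch, speed = BatchLayer(), SpeedLayer()
batch.recompute(master)
speed.ingest({"account": "A"})          # late-arriving event
serving = ServingLayer(batch, speed)
print(serving.transactions_for("A"))    # → 3
```

The point of the separation is that either side can be rebuilt or scaled independently: the batch view can be recomputed from scratch without affecting reads, and the speed layer is discarded once its events are absorbed into the next batch run.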
BlueSoft delivered an MVP featuring a fully automated replication layer that aggregates data from various source systems (legacy systems). Data ingestion follows a metadata-driven development approach with automatic ETL code generation.
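A minimal sketch of what metadata-driven ETL generation means in practice: instead of hand-writing replication jobs, the pipeline code (here, a MERGE statement) is generated from table metadata, so onboarding a new source table means adding a metadata entry rather than writing new code. The table, schemas, and column names below are invented for illustration.

```python
# Hypothetical sketch of metadata-driven ETL code generation:
# SQL is derived from declarative table metadata, limiting hand-written code.
TABLE_META = {
    "name": "accounts",
    "key": ["account_id"],
    "columns": ["account_id", "iban", "balance", "updated_at"],
    "source_schema": "core_banking",
    "target_schema": "raw_zone",
}


def generate_merge_sql(meta):
    """Builds an upsert (MERGE) statement from a table's metadata entry."""
    cols = ", ".join(meta["columns"])
    on = " AND ".join(f"t.{k} = s.{k}" for k in meta["key"])
    updates = ", ".join(
        f"t.{c} = s.{c}" for c in meta["columns"] if c not in meta["key"]
    )
    values = ", ".join(f"s.{c}" for c in meta["columns"])
    return (
        f"MERGE INTO {meta['target_schema']}.{meta['name']} t "
        f"USING {meta['source_schema']}.{meta['name']} s ON {on} "
        f"WHEN MATCHED THEN UPDATE SET {updates} "
        f"WHEN NOT MATCHED THEN INSERT ({cols}) VALUES ({values})"
    )


print(generate_merge_sql(TABLE_META))
```

Because every replication job is produced by the same generator, a schema change in a source system is handled by updating one metadata entry, which is how this approach reduces the human-error factor mentioned above.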
An agile joint team is currently developing the next layers, covering batch and stream processing, together with a serving layer that democratizes data across the organization.


Results

The Data Transformation Architecture is designed to serve as the new single source of truth for data stored in the bank’s ecosystem. Thanks to automation, the data flows are resilient, secure, and fast. Proven design patterns such as the Data Lake and Data Hub give data consumers easy access to data.
Working side by side with the bank’s team, we built a truly interdisciplinary product team of architects, analysts, developers, and ops specialists working in the Scrum framework.
