Our #PosterPitchSpeakers have taken on the challenge of performing on stage! Each will get 3 minutes and 3 slides to pitch their use case or story, one after the other. The challenge is to get their storyline on point and spark your interest in the second part of the session. After a coffee break, participants will get the chance to talk to our #PosterPitchSpeakers in detail in one of our workshop studios, where the speakers will have their posters at hand for a deep dive into their topics. Don’t miss this exclusive opportunity to discuss with the experts on-site! For more information, click on our Meet the Poster-Pitch-Speakers.
Nowadays, Data Science departments are a crucial building block of any company, and they are continuously growing in size. However, the expectations placed on data scientists are often ambiguous, sometimes leaving the data scientists themselves searching for their true mission within the company. Why might data scientists be unable to contribute their expertise in machine learning? It’s easy to point fingers at the various (non-data) departments and blame them for not asking the right questions! That sounds too simple for us. We see things differently: the responsibility lies with the data scientists themselves. We are the experts who know the possibilities and limits of data science, so it is our responsibility to search for and develop new potential data products. Lame excuses such as “There is no data available” or “There is no valuable use case for machine learning” are finally a thing of the past. Most departments at Cluno were initially unaware of what would be feasible with data science. Certainly, they were looking for solutions, but these efforts were still in their infancy. In order to grow quickly, we had to look for new approaches and think outside the box. We found ways to create added value for the departments that they would never have dared to dream of. Join us as we discuss who is responsible for the successful implementation of innovative data products within a company. We look forward to exchanging experiences with you and sharing our data journey at Cluno.
Agile methods have gained enormous relevance through digitalization.
Today, agility stands for maximum flexibility, rapid adaptation to changing market needs and user focus. Almost everyone has heard of Scrum, Design Thinking, Lean, Business Model Canvas or Design Sprints.
The agile jungle is densely overgrown with these buzzwords, and it is easy to lose track.
Let us guide you through the jungle.
We will explain which methods can be used sensibly in which project phase, based on the recommendations of their respective founders and on the experience we have gained in various projects over the last few years.
You will get a broad overview of today’s most commonly used agile methods and experience some of them interactively yourself.
A case study showing how Lloyds Banking Group uses data journalism, a combination of storytelling and data visualization, to help executives make data-driven decisions.
Speaker: Isabelle Marchand (Lloyds Banking Group)
In the corporate context, the consistent development of a future-oriented Data & Analytics platform is not always easy. Owing to common pitfalls and the sometimes tight constraints of a long-term cloud strategy, initiatives and projects of this kind often amount to a tightrope walk.
We will show you how to master this type of challenge, and that there are always alternatives in architecture and technology selection when migrating the first business-relevant applications to the cloud and building up data & analytics capabilities. Apache NiFi instead of Data Factory? Elastic/Kibana instead of Application Insights? Kubernetes from day 1? Honestly – why not? And definitely again! “Start doing beats waiting!”
What is needed here is a competent and flexible partner who dares to leave the beaten track, finds creative solutions and proves that they really lead to the goal. Thus necessity becomes a virtue, and an idea becomes a pragmatic solution.
How can we earn money with machine learning services? Most likely by scaling them across multiple customers and industries. However, B2B software is often highly customizable and configurable. As a result, every customer might have a different data structure, which makes it hard to develop common machine learning services. How can we deal with this challenge? And to what extent can we automatically handle variance in training data?
In this talk, we discuss the idea of automated machine learning, elaborate on its usefulness for B2B software and share the lessons we learned along the way. Specifically, we provide insights into our journey of developing machine learning services for more than 800 B2B platform customers.
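The core idea behind automated machine learning can be illustrated with a toy sketch. Everything below is a hypothetical, stdlib-only stand-in: a simple hyperparameter search over a trivial classifier family plays the role that a full AutoML system plays when it searches over entire pipelines per customer dataset.

```python
# Minimal sketch of automated model selection (assumed toy setup, not
# the talk's actual system): per-customer data reduced to one numeric
# feature, and a search over a cutoff hyperparameter scored on a split.
import random
from statistics import mean

random.seed(0)

# Hypothetical customer data: feature x, label y = (x > 0.5) with 10% noise.
data = [(x := random.random(), int(x > 0.5) ^ (random.random() < 0.1))
        for _ in range(200)]

def threshold_model(t):
    """A trivial classifier family parameterized by a cutoff t."""
    return lambda x: int(x > t)

def accuracy(model, rows):
    return mean(int(model(x) == y) for x, y in rows)

# The "automated" part: search candidate hyperparameters on training
# data and keep the best -- the same pattern an AutoML system applies
# across whole preprocessing + model pipelines.
train, holdout = data[:150], data[150:]
best_t = max((t / 20 for t in range(1, 20)),
             key=lambda t: accuracy(threshold_model(t), train))
print(best_t, round(accuracy(threshold_model(best_t), holdout), 2))
```

With per-customer data structures, the key design choice is that only the search space is shared; the selected pipeline is allowed to differ per customer.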
Speaker: Adrian Engelbrecht (Serviceware SE)
Data science projects often start with prototyping in Jupyter notebooks. But how do you tackle the challenge of bringing your newly trained models into production and letting users take advantage of what you developed? In this talk I will present an automation project in which we use OCR, NLP and machine learning to retrieve key figures from deal-processing instructions of external banks. I will guide you through the whole development lifecycle – from initial prototyping, through developing a REST API and containerizing the service, to a final product including a user interface. The final solution is then shown as a demo to give you the full user experience!
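The REST API step of such a lifecycle can be sketched with a minimal stdlib-only endpoint. The `predict` function here is a hypothetical stand-in for the actual OCR/NLP model (it just counts digits); the talk's real stack is not specified, and production services typically use a framework such as FastAPI or Flask inside the container.

```python
# Minimal sketch of wrapping a trained model in a REST endpoint
# (assumed stand-in model; stdlib only, no web framework).
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(text: str) -> dict:
    """Hypothetical stand-in for key-figure extraction: count digits."""
    return {"digits": sum(c.isdigit() for c in text)}

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        body = json.dumps(predict(payload["text"])).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep request logging quiet

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
```

A client would POST `{"text": ...}` to the server's port and get the extracted figures back as JSON; containerizing the service then amounts to running this same process behind an exposed port.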
Speaker: Katalin Westhoff (Siemens Financial Services)
Ever since sprinters were first timed in the 1920s, measurement and data collection have had a long tradition in sport. Especially in team sports, the amount of data has grown rapidly in recent years.
This presentation introduces data analysis in sports using practical examples and also draws parallels to companies to show the numerous overlaps. Questions like “How do they know how far players run and how many passes they complete?” will be answered.
Due to the popularity of football and the coverage of the entire spectrum from data collection to the presentation of analyses, this presentation provides interesting content for both beginners and experts.
Overview: Basic overview of data analysis in sports – How is data collected in sport? – Overview of methods for data analysis in football.
Which of your topics are covered? Data Science & Machine Learning (examples from football) – Data Visualisation & Analytics (examples from football) – Infrastructure & Databases (example of a Club Information System)
Speaker: Thomas Blobel (Technische Universität München / FC Ingolstadt)