
AI Navigator #5: Transformer as a Transformer of Healthcare

Welcome to the fifth edition of the DOAG KI Community’s AI Navigator column!


Siming Beyer is a computer scientist and AI researcher. As a young scientist at Friedrich-Alexander University, she develops novel AI algorithms for a wide range of application areas. She is also responsible for building strategic research partnerships at a leading medical technology company to advance the use of AI algorithms in healthcare.




As a physicist and computer scientist, Björn Heizmann has been working on development, marketing and strategy at a leading medical technology company for more than two decades. Björn also conducts research on medical imaging and artificial intelligence at the Friedrich-Alexander University Erlangen-Nuremberg.


As scientists in the field of medical technology and AI, we have been following the development of artificial intelligence and its application in medicine over the past fifteen years – from pattern recognition and machine learning to deep learning and generative AI. Recently we have often faced the question of what AI actually brings to medicine.

Why, of all things, should AI drive the transformation of medicine? Medicine is a deeply human discipline characterized by trust. Who wants to be diagnosed or even treated by a machine?

In fact, healthcare is a highly complex process chain in which many decisions are made on an interdisciplinary basis. And it is precisely these decisions that are changing rapidly with new technological possibilities. When we feel ill, we have for many years been able to use the Internet and specialized apps for an initial interpretation of our symptoms. Web searches and portals also help when choosing a family doctor or a specialist for treatment. ChatGPT has already passed written medical exams in the United States and can be consulted just like the Internet (without guarantees, of course). Last but not least, we also measure ourselves with mobile devices and smartwatches to stay fit or to notice health abnormalities early.

In almost every one of these options available to patients, AI is explicitly present to a greater or lesser extent. Google Search has been using BERT, one of the first commercially deployed Transformer models, since 2019. Today’s ChatGPT – and comparable large language models (LLMs) such as Llama 3 or Mistral – use the attention mechanism of the Transformer architecture to answer complex questions. Internet portals and smartwatches rely on AI to evaluate user data and needs and to generate personalized suggestions. AI has therefore long accompanied us in medicine – on the patient side. But does it keep us healthy and happy?
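
To make the term "attention mechanism" a little more tangible, here is a minimal sketch of scaled dot-product attention, the core operation of the Transformer architecture, written in NumPy. Real LLMs add learned projection matrices, many attention heads and dozens of layers, but the underlying computation is the one shown here.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention as introduced in "Attention Is All You Need" (2017).

    Q, K, V are matrices with one row per token. The result is, for each token,
    a weighted average of all value vectors in V, where the weights express how
    strongly that token "attends" to every other token.
    """
    d_k = Q.shape[-1]
    # Similarity of every query with every key, scaled for numerical stability.
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax turns the scores into attention weights that sum to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Toy example: a "sentence" of three tokens with four-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(x, x, x))
```

Note that everything in this sketch boils down to matrix multiplications and a softmax – which is exactly why hardware optimized for linear algebra, discussed later in this article, made today's LLMs possible.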

Medical treatment paths often seem complex and confusing to patients, but in principle they follow well-defined, understandable steps. The first step is to clarify the symptoms and the patient's general well-being in a doctor-patient conversation. Depending on the results and the patient's needs, additional diagnostic steps follow, from determining vital parameters to laboratory markers and medical imaging. On this basis, the doctor decides on therapeutic steps such as surgery or prescribing medication.

Ideally, the patient then recovers; otherwise, the cycle described above begins again. Along the way, referrals to specialists or consultations with medical colleagues are often necessary, resulting in even more complex treatment paths.

AI can provide assistance in these clinical processes. Contrary to what is often believed or even feared, this is not about replacing doctors or nurses with AI and robots. To understand why AI is becoming so important to this part of medical care right now, it helps to take a look at the history of machine learning, and in particular at the development of Transformers and generative AI mentioned above.

AI began in the 1950s with Alan Turing as an outstanding pioneer. He asked a far-reaching question: if humans use available information and logical reasoning to solve problems, why shouldn’t machines be able to do the same? Unfortunately, the implementation of this AI approach was technically impossible at the time.

In 1956, a landmark conference took place: the “Dartmouth Summer Research Project on Artificial Intelligence”. It ensured that AI research flourished into the 1970s. During this period, the so-called Rosenblatt perceptron and the Bellman equation were proposed, which form the theoretical basis of modern deep learning and reinforcement learning. Over time, investors lost patience and the so-called AI winter set in. This cyclical pattern – scientific breakthroughs, excitement and investment, followed by inflated expectations, disappointment and declining interest – was repeated several times. In the 1990s and 2000s, the term AI practically disappeared from public awareness.
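
For readers who want to see the two ideas behind these names, here is a brief sketch in today's common notation (not Rosenblatt's or Bellman's original formulation):

```latex
% Rosenblatt's perceptron: a weighted sum of the inputs followed by a
% threshold -- the ancestor of the artificial neuron in deep networks.
y = \begin{cases} 1 & \text{if } \mathbf{w}\cdot\mathbf{x} + b > 0,\\ 0 & \text{otherwise.} \end{cases}

% Bellman equation: the value of a state s is the best achievable immediate
% reward plus the discounted value of the successor state -- the foundation
% of reinforcement learning.
V(s) = \max_{a}\Big( R(s,a) + \gamma \sum_{s'} P(s' \mid s,a)\, V(s') \Big)
```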




(Image: DOAG)

Beyond your own company's experience and the industry expertise of data scientists, it is helpful to learn from the best practices of other companies and applications. The AI Navigator Conference, which will take place in Nuremberg on November 20 and 21, 2024, is ideal for this.

Organized by DOAG, Heise Medien and D’Ge’Pole, the event is the central platform for decision-makers from business, technology and society to exchange views on the practical applications and challenges of AI in Germany. The conference focuses on practical benefits, with participants gaining direct insight into the successful implementation and adaptation of AI systems.

In addition, the AI Navigator Conference promotes the exchange of best practices and enables the establishment of strategic partnerships in order to understand the dynamic developments in the AI industry and to explore innovative solutions that are already pushing the boundaries of what is possible and transforming technology, business and society.

Ironically, AI research achieved many technological breakthroughs in the absence of government funding and public enthusiasm. IBM’s Deep Blue defeated Garry Kasparov at chess. Speech recognition software gradually made its way into many devices and even into Windows with Cortana. Emotion recognition was scientifically advanced and commercially used in first-level support. Google’s AlphaGo beat the best human player of the time, Lee Sedol, at Go, a game previously thought to be impossible for machines to learn. And an AI publicly made a hair salon appointment – a nice reversal of our usual expectations.

Public interest has returned since ChatGPT was presented in November 2022, and the sense of optimism is even greater this time. Algorithmically, the core is still linear algebra and probability calculations. The basic way in which we train machines to become intelligent has also remained the same.

The renewed boom is mainly due to improved hardware: modern GPU and ASIC architectures offer several orders of magnitude more computing power and memory than computers of the 1990s, and they are optimized for linear algebra operations. Nvidia’s stock market value, which has increased nearly twentyfold in the past five years, is essentially based on this realization: “It’s linear algebra, stupid!”

The second reason is the rapidly advancing digitalization of many areas of life. We are generating, storing and processing more and more data, especially in the medical field. The amount of medical image data produced globally is currently on the order of one petabyte per year. This corresponds to about 500 billion fully printed DIN A4 pages (assuming roughly two kilobytes of text per page).
