• Published on: Jun 20, 2020
  • 4 minute read
  • By: Dr Rajan Choudhary

Artificial Intelligence In Healthcare

Artificial intelligence. This phrase means different things to different people. To some, it conjures ideas of robots with the same intelligence and creativity as humans, able to carry out any task we instruct them to, only better than us. To others, it is a new and exciting tool, one that could revolutionise the way we work, but also the way labour is distributed in society. And for developers? They dread being asked to build an artificial intelligence system by people who have only encountered buzzwords such as “machine learning” and “deep neural network” in headlines and blogs.

In this blog we will look at the basics of AI terminology, so we can understand what these terms really mean, and whether they will have an impact on healthcare.

WHAT IS ARTIFICIAL INTELLIGENCE?

Even this question is difficult to answer, as it enters the realms of philosophy and debate over the meaning of intelligence. What makes a person intelligent? Is it their retained knowledge? A computer can store the entirety of recorded human knowledge on a disc. Is it understanding and following instructions? Or is it creativity, a skill even the average person may struggle with at times? We know one thing for sure: distilling a person’s intelligence down to a single IQ number is reductive and does not represent true intelligence.

Similarly, people define AI in different ways. A broad definition looks at the ability of a computer or programme to respond autonomously to commands and to the changing environment around it, to recognise audio or visual cues, to process that information without strictly defined rules, and to produce a desired output.

The key features appear to be autonomy, the ability to function independently of a human controller or guide, and adaptability, the ability to work beyond strict rules and criteria and to handle situations or inputs beyond the original programming.

In medicine, a “dumb” system could take physical values, for instance blood results, compare them against a “normal range” and determine whether the results are abnormal (e.g. whether the patient has anaemia).
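As a toy illustration, here is what such a rule-based check might look like in Python. The cut-offs follow the commonly cited WHO haemoglobin thresholds; everything else is invented for the example.

```python
# A minimal sketch of a rule-based ("dumb") check: compare a blood
# result against a fixed reference range and flag anything outside it.
# Thresholds follow the commonly cited WHO cut-offs for anaemia.

def check_anaemia(haemoglobin_g_dl: float, sex: str) -> str:
    """Flag possible anaemia from a haemoglobin value using fixed cut-offs."""
    threshold = 13.0 if sex == "male" else 12.0  # g/dL
    if haemoglobin_g_dl < threshold:
        return "abnormal: possible anaemia"
    return "within normal range"

print(check_anaemia(10.5, "female"))  # abnormal: possible anaemia
print(check_anaemia(14.2, "male"))    # within normal range
```

There is no intelligence here: the system applies fixed rules to clean numerical inputs and can do nothing beyond them.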

A smart “AI” would be able to look at a CT scan, notice subtle changes in the images, compare them against what a normal scan should look like, and identify the pathology. This is very difficult because normal scans can differ noticeably between patients (for instance due to anatomical variation), and disease findings can be even more varied and unusual. Human brains have incredibly complex pattern-recognition systems; over a third of the human brain is dedicated to visual processing alone. Imagine trying to re-create that in code.

At first, people tried to emulate this with fixed programming. For instance, to teach a programme to recognise a bicycle, you would need to teach it to first exclude anything that is not a vehicle, then exclude anything that does not have wheels, has more than two wheels, lacks a frame connecting the two wheels, lacks a chain connecting the pedals to the rear wheel… and so on. All of this for a bike. Now imagine trying to code it to recognise subtle changes to cells under a microscope, to recognise cancer cells, or to recognise an abnormal mass on a scan. Clearly this approach is clunky and simply not feasible.
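To see why this gets clunky fast, here is a sketch of what those hand-written rules might look like. The feature fields are hypothetical; in practice, even extracting a property like “wheel count” from raw pixels is itself a hard problem for fixed rules.

```python
# A toy illustration of hand-coded rules for "is this a bicycle?".
# The feature fields (is_vehicle, wheel_count, ...) are hypothetical;
# in reality, deriving them from raw pixels is itself the hard part.

def is_bicycle(obj: dict) -> bool:
    if not obj.get("is_vehicle"):
        return False
    if obj.get("wheel_count") != 2:
        return False
    if not obj.get("has_frame_connecting_wheels"):
        return False
    if not obj.get("has_pedal_chain"):
        return False
    # ...and so on, one brittle rule per property, for every bike
    # design, viewing angle and lighting condition imaginable.
    return True

print(is_bicycle({"is_vehicle": True, "wheel_count": 2,
                  "has_frame_connecting_wheels": True,
                  "has_pedal_chain": True}))  # True
```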

MACHINE LEARNING

Modern AI systems have moved towards “machine learning”. This is a statistical technique that fits models to input data, “learning” by training on known data sets. Instead of a person defining what a bicycle is, the model is shown thousands of pictures of bikes, and the programme forms its own rules for identifying one. If the model is then shown a picture of a bike, it reports the statistical likelihood that the picture contains a bike. The system could be expanded by further training the model with pictures of motorbikes, scooters and other two-wheeled forms of transport. Now, given a picture, the model can determine which type of two-wheeled transport it has been shown.
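A minimal sketch of that idea, assuming scikit-learn and random stand-in data in place of real labelled photographs:

```python
# A hedged sketch of the machine-learning alternative, using
# scikit-learn (an assumption; any ML library would do). Instead of
# hand-written rules, a classifier is fitted to labelled examples
# and learns its own decision boundary.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for real data: each "image" is a flattened pixel vector,
# and each label names a type of two-wheeled transport.
X_train = rng.random((1000, 16 * 16))  # 1,000 tiny training images
y_train = rng.choice(["bike", "motorbike", "scooter"], size=1000)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# For a new picture, the model returns a statistical likelihood
# for each class rather than a hard-coded yes/no.
new_image = rng.random((1, 16 * 16))
for label, p in zip(model.classes_, model.predict_proba(new_image)[0]):
    print(f"{label}: {p:.2f}")
```

The key shift is that the rules live in the learned weights, not in code a programmer wrote by hand.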

The healthcare application can be simple; let’s look at a radiology example. Teach an AI model what normal lungs look like, then show it images of various pathologies such as pneumonia, fibrosis or even lung cancer. If fed enough images and variations of a disease, the AI’s statistical analysis might even find associations and patterns that a human radiologist would be unable to spot.

NEURAL NETWORKS

A more complex form of machine learning is the neural network. Its name suggests an analogy with the neurons in a human brain, though the analogy does not stretch much further than that. A neural network splits an image into many components, analyses those components for relevant features across layers of interconnected nodes, and then outputs a decision.
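As a rough sketch of what a single pass through such a network involves, here is one hidden layer written out in NumPy; the weights are random stand-ins for values a real network would learn during training.

```python
# A bare-bones forward pass through one neural-network layer: inputs
# (e.g. pixel values) are combined through weights and a non-linearity,
# loosely echoing how neurons fire past a threshold.
import numpy as np

def relu(x):
    return np.maximum(0, x)

rng = np.random.default_rng(1)
pixels = rng.random(16 * 16)  # a flattened input image

# One hidden layer of 128 "neurons": each computes a weighted sum of
# every pixel, then applies an activation function.
W1, b1 = rng.standard_normal((128, 16 * 16)) * 0.01, np.zeros(128)
hidden = relu(W1 @ pixels + b1)

# An output layer turns the hidden features into class scores,
# converted to probabilities with a softmax.
W2, b2 = rng.standard_normal((3, 128)) * 0.01, np.zeros(3)
scores = W2 @ hidden + b2
probs = np.exp(scores) / np.exp(scores).sum()
print(probs)  # untrained, so roughly uniform across the 3 classes
```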

The most complex forms of machine learning involve deep learning. These models utilise thousands of hidden features and several layers of decision-making and analysis before a conclusion is reached. As computing power increases, so does the ability to create ever more complex models capable of analysing dense, three-dimensional images. Deep learning models have been able to identify cancers on CT and MRI scans that were missed even by expert consultants. They can also identify structures and patterns the human eye cannot, and may end up being better at certain diagnoses than a highly trained specialist. Of course, such diagnoses would still have to be checked by a doctor, given the medico-legal implications of an incorrect diagnosis produced by a model that even its programmers cannot fully interpret.
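For flavour, here is a hedged sketch of a small 3D convolutional network in Keras; real diagnostic models are far deeper, and the input size and layer choices here are assumptions made for the example.

```python
# A sketch of a deep 3D convolutional network for volumetric scans,
# using Keras (an assumption; real diagnostic models are far larger
# and trained on curated clinical datasets).
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    # A CT volume: 64 slices of 128x128 pixels, one intensity channel.
    layers.Input(shape=(64, 128, 128, 1)),
    # Stacked 3D convolutions learn progressively more abstract
    # features: edges, textures, then whole structures.
    layers.Conv3D(16, kernel_size=3, activation="relu"),
    layers.MaxPooling3D(pool_size=2),
    layers.Conv3D(32, kernel_size=3, activation="relu"),
    layers.MaxPooling3D(pool_size=2),
    layers.GlobalAveragePooling3D(),
    # Final layers turn the learned features into a probability
    # that the scan contains the pathology of interest.
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```

The “hidden features” live in the convolutional filters; with several stacked layers, no programmer can point to a single rule and say why a particular scan was flagged.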

NATURAL LANGUAGE PROCESSING

But the application of AI is not limited to identifying images and scans. One of the greatest hurdles a computer faces is understanding human speech. Transcribing speech to text is easy; understanding the meaning of what was said, and using it to generate instructions or structured data, is hard. This is why the iPhone’s Siri and Google Assistant on Android phones can seem so limited. They reliably recognise only set instructions such as “What is the weather?” or “Set an alarm for…”. More complicated instructions or requests usually result in an error.

People don’t speak in simple sentences. Asked about their symptoms, every patient will use different sentence structures and adjectives, prioritise different symptoms depending on how they are affected, and give a narrative rather than a list. Similarly, when writing patient notes, doctors use complex sentences and shorthand, and structure their notes differently from one another. Feeding this information to Siri would not output a clear diagnosis; it would give the poor digital assistant a migraine.

Deep learning is being used to analyse natural speech and pick out the important information that leads to a diagnosis, similar to how a medical student is trained to take a history. Deployed successfully, this would be invaluable for triaging patients based on the severity of their symptoms and assigning them to the right specialists.
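To make the input and output of the task concrete, here is a deliberately naive keyword-matching sketch; real systems use trained language models, and the symptom list and severity weights below are invented for illustration.

```python
# A deliberately naive sketch of symptom extraction for triage.
# Real systems use trained language models; this keyword matcher only
# illustrates the shape of the task. Symptoms and weights are invented.
SYMPTOM_SEVERITY = {
    "chest pain": 3,
    "shortness of breath": 3,
    "fever": 2,
    "cough": 1,
    "headache": 1,
}

def triage(note: str) -> tuple:
    """Extract known symptoms from free text and assign a crude urgency."""
    note = note.lower()
    found = [s for s in SYMPTOM_SEVERITY if s in note]
    score = sum(SYMPTOM_SEVERITY[s] for s in found)
    urgency = "urgent" if score >= 3 else "routine"
    return found, urgency

note = ("Patient reports a three-day history of cough and fever, "
        "now with shortness of breath on climbing stairs.")
print(triage(note))
# (['shortness of breath', 'fever', 'cough'], 'urgent')
```

The hard part a real model must solve, and this sketch does not, is that patients rarely use the exact words on any predefined list.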

It would also have huge implications for research. Extracting data is labour- and time-intensive, and the cost of trawling through patient notes can significantly limit the feasibility of research studies. A deep learning system could read the notes, identify the important symptoms, track how a patient improves from day to day along with other subtle parameters, and work through thousands of cases without supervision, boredom or fatigue. The wealth of information made available could significantly improve the quality of research.

Artificial intelligence and its various buzzwords can be difficult to break down and digest. This blog certainly will not answer all of your questions, and may leave you with more than you started with. But understanding the basics of AI helps in appreciating the effort that goes into creating these systems, and in acknowledging the hurdles that keep AI from becoming prevalent across healthcare.

At least for now. Progress in this field is constant. By next year the AI landscape may be very different.

Dr Rajan Choudhary

HEAD OF PRODUCTS, SECOND MEDIC INC UK

Virtual Cooking Class with Dietitian: A New Era of Healthy Eating in India

Healthy eating has become a top priority for individuals across India. With rising lifestyle diseases such as diabetes, hypertension, obesity and PCOS, food decisions now play a critical role in preventive healthcare. However, most people struggle with questions like what to cook, how to modify recipes, and how to balance nutrition with traditional Indian meals.

Virtual cooking classes with dietitians are transforming how Indians learn about food. They combine practical kitchen skills with scientific nutrition knowledge, something traditional cooking tutorials cannot offer. SecondMedic integrates expert dietitians, AI-driven nutrition analysis and preventive health frameworks to support individuals in building lifelong healthy eating habits.

This blog explores how virtual cooking classes work, why they matter and how they support long-term health.

 

Why India Needs Dietitian-Led Cooking Classes

Rising Lifestyle Diseases

The ICMR Nutrition and Metabolic Health Study reports alarming trends:

  • Over 100 million diabetic individuals

  • High prevalence of fatty liver

  • Vitamin deficiencies in large sections of the population

  • Increasing PCOS, thyroid disorders and obesity
     

Many of these conditions are strongly influenced by diet.

Lack of Nutrition Awareness

NFHS-5 highlights low dietary diversity among Indian households. People often overconsume oil, sugar and refined grains without realising the long-term impact.

Busy Lifestyles

Urban professionals struggle to plan meals due to:

  • Time constraints

  • Lack of structured nutrition knowledge

  • Dependence on takeaways and packaged food
     

Virtual cooking sessions solve these problems by offering guided, practical learning directly from home.

 

What Happens in a Virtual Cooking Class?

A SecondMedic virtual cooking class includes:

1. Live Demonstrations

Dietitians prepare recipes step-by-step while explaining:

  • Nutrient functions

  • Health benefits

  • Cooking techniques

  • Smart portion strategies
     

2. Ingredient Education

Participants learn about:

  • Low-GI alternatives

  • High-fibre grains

  • Clean protein sources

  • Anti-inflammatory spices

  • Healthy fats
     

3. Meal Planning Guidance

Classes often include weekly planning tips to simplify daily decisions.

4. Nutrient Breakdown

AI-based tools analyse the recipe’s key nutritional metrics (a simplified sketch follows this list):

  • Sugar load

  • Sodium balance

  • Protein density

  • Vitamin & mineral profile
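As a purely hypothetical illustration of such an analysis, the sketch below sums per-ingredient values into recipe totals; the ingredient data and logic are invented for the example and are not SecondMedic’s actual tooling.

```python
# A hypothetical illustration of a recipe nutrient breakdown. The
# per-ingredient values below are invented for this example and are
# not SecondMedic's actual data or engine.
NUTRIENTS_PER_100G = {
    "moong dal":  {"sugar_g": 2.0, "sodium_mg": 15, "protein_g": 24.0},
    "brown rice": {"sugar_g": 0.7, "sodium_mg": 7,  "protein_g": 7.5},
    "ghee":       {"sugar_g": 0.0, "sodium_mg": 0,  "protein_g": 0.0},
}

def recipe_totals(ingredients: dict) -> dict:
    """Sum nutrients for a recipe given ingredient weights in grams."""
    totals = {"sugar_g": 0.0, "sodium_mg": 0.0, "protein_g": 0.0}
    for name, grams in ingredients.items():
        for nutrient, per_100g in NUTRIENTS_PER_100G[name].items():
            totals[nutrient] += per_100g * grams / 100
    return {k: round(v, 2) for k, v in totals.items()}

khichdi = {"moong dal": 100, "brown rice": 150, "ghee": 10}
print(recipe_totals(khichdi))
# {'sugar_g': 3.05, 'sodium_mg': 25.5, 'protein_g': 35.25}
```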
     

5. Condition-Specific Variations

Recipes can be adapted for:

  • Diabetes

  • PCOS

  • Thyroid health

  • Heart health

  • Weight loss
     

This ensures suitability across lifestyles.

 

Benefits of Virtual Cooking Classes

1. Practical, Hands-On Learning

Participants cook alongside the dietitian, making learning interactive and easy to remember.

2. Prevention-Focused

Unlike regular cooking tutorials, these sessions emphasise preventive eating patterns recommended by WHO and NITI Aayog.

3. Customisable for Families

Healthy recipes become household-friendly, improving community nutrition.

4. Convenient and Accessible

Join from anywhere without travel or scheduling challenges.

5. Increases Long-Term Adherence

When people understand why a recipe is healthy, they adopt it more consistently.

 

Example Recipe Taught in Class

Vegetable Khichdi (Diabetes-Friendly Version):

  • Moong dal for high protein

  • Mixed vegetables for fibre

  • Minimal ghee

  • Brown rice/millet for lower GI

  • Turmeric + cumin for anti-inflammatory benefit
     

SecondMedic’s AI engine evaluates glycaemic impact and micronutrient density.

 

Integrating Virtual Cooking With Preventive Care

SecondMedic combines cooking classes with:

  • Teleconsultations

  • Diet assessments

  • AI nutrition scores

  • Weight and glucose monitoring

  • Lifestyle coaching
     

This creates a unified ecosystem for long-term behaviour change.

 

Conclusion

Virtual cooking classes with dietitians empower individuals to transform their daily meals into preventive healthcare tools. By teaching practical skills, nutrition fundamentals and personalised recipe adjustments, these classes make healthy eating accessible, enjoyable and sustainable.

SecondMedic is redefining preventive nutrition by blending expert guidance with digital interactivity and AI insights, helping people cook better, eat smarter and live healthier.

References

• ICMR Nutrition & Metabolic Health Study: Dietary Impact on Chronic Diseases
• National Family Health Survey (NFHS-5), Ministry of Health & Family Welfare
• NITI Aayog: Preventive Healthcare & Nutrition Strategy for India
• WHO: Healthy Eating & Non-Communicable Disease Guidelines
• Lancet Public Health: Effectiveness of Lifestyle Interventions
• Statista: India Digital Health & Online Learning Trends
• EY-FICCI: Digital Nutrition & Virtual Wellness Report
