Healthcare Chatbots: Innovative, Efficient, and Low-cost Care

A new report from Juniper Research has found that annual cost savings derived from the adoption of chatbots in healthcare will reach $3.6 billion (€3.04 billion) globally by 2022, up from an estimated $2.8 million (€2.36 million) in 2017. This growth will average 320% per annum, as AI (artificial intelligence) powered chatbots will drive improved customer experiences for patients. (Source: IoT Now)

Experts believe that medical chatbots will soon play an important role in the healthcare industry. While the world faces the problem of information overload, bots can help separate reliable information from misinformation. Healthcare chatbots address the industry's most pressing problems:

  1. Accessibility: Human staff may not be available in every critical situation, but bots are, making appointments and scheduling simple.
  2. SOS: Healthcare chatbots can act as an alarm when patients describe life-threatening symptoms.
  3. Customization: Bots with NLP capabilities can understand voice and text-based queries efficiently. This is especially helpful for people with visual or speech impairments.
  4. Integrations: Chatbots are compatible with IoT devices like Google Home, Alexa, etc.
  5. Personalization: Patient-specific bots can place a call, advise first aid, and even send notifications/messages to physicians.

Before we delve deeper into medical chatbots, let’s quickly look at what exactly a chatbot is.

A Brief History of Chatbots

The first computer program that could communicate with humans was Eliza, developed at MIT by Joseph Weizenbaum in 1966. As later programs came closer to passing the Turing test, e-commerce, messaging, healthcare, and other industries took a deep interest in chatbots.

A chatbot is an artificial intelligence program that can interact, respond, advise, assist, and converse with humans, mimicking two-way communication between individuals. Early chatbot implementations handled tasks like scheduling an appointment and answering fundamental queries. Today, the scope of chatbots is much broader, covering recommendations, references, diagnosis, and even preliminary treatments.
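To make the idea concrete, here is a minimal, purely illustrative rule-based chatbot sketch in Python. The keywords and canned responses are hypothetical; real medical chatbots rely on NLP models rather than keyword matching.

```python
# Minimal rule-based chatbot: keyword matching maps a patient's message
# to a canned intent. The intents and replies below are made up.
RESPONSES = {
    "appointment": "Sure, I can help you book an appointment. Which day works for you?",
    "symptom": "Please describe your symptoms and I will suggest the next steps.",
    "emergency": "This sounds urgent. Contacting your emergency contact now.",
}

def reply(message: str) -> str:
    text = message.lower()
    for keyword, response in RESPONSES.items():
        if keyword in text:
            return response
    # Fallback when no intent matches the message.
    return "I'm sorry, could you rephrase that?"

print(reply("I need to book an appointment"))
```

Production systems replace the keyword loop with an intent classifier, but the request-to-response flow stays the same.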

Snapshot of Your.MD healthcare chatbot illustrating the services it provides

How are Medical Chatbots Reshaping the Healthcare Industry?

Chatbots can improve patients’ engagement and experiences with a hospital or physician. The following are proven benefits of healthcare chatbots.

Accessible Anytime and Anywhere

Today, chatbots can be embedded in websites, mobile apps, and even third-party apps like Facebook and WhatsApp. There is no need to download them separately or go through registration and activation. For instance, Religare, a leading health insurer, has its chatbot integrated into WhatsApp.

Generally, hospitals and other healthcare organizations provide a chatbot built into their apps. Apart from the usual support, it also helps keep your medical records secure in one place, and these records can be shared with the doctor whenever required.

There are mental health and therapy chatbots that provide continuous support to patients with mental illness, depression, or sleeplessness. Wysa, a mental therapy chatbot, is one such example.

Chatbots for SOS (Emergency Alarms)

During life-threatening events, chatbots can raise an alarm automatically. For example, if a person who complains about chest pain does not respond to messages within a stipulated time, an emergency call made by the bot to healthcare providers, family, or friends can ensure timely help.

Customization

Organizations can customize chatbots to decipher different languages, voice accents, and text patterns. With different modes of conversation, chatbots can simplify communication. For instance, a person with a visual impairment can rely on voice interaction, while chatbots with NLP capabilities can help people with speech difficulties.

Another remarkable development we are going to witness in chatbots is Emotion AI. Soon, bots will be able to understand a user’s emotions based on text, voice, or sentence structure. This will be a great help in understanding the sentiments of people suffering from depression or other mental illnesses.

Integration with Different Platforms and Personalization

Because of their omnichannel nature, one can easily integrate bots with the web, mobile, or third-party apps, and APIs. They can even work with voice assistants like Alexa and Google Home. With seamless integration, a bot can place a call, advise you on first aid, analyze your medical history and send notifications to your doctor.

Voice-enabled chatbots and multilingual chatbots are disrupting the way customers engage with chatbots. Voice-enabled chatbots increase accessibility and speed up the query process, as the user does not have to type. More than 70% of Indians face challenges while using an English keyboard, and approximately 60% of them find the language to be the key barrier to adopting digital tools. Vernacular language support can personalize services for native users and make the whole process of maintaining medical records much easier. For instance, voice-driven chatbots can be integrated to automate help desk tasks and respond to customer queries.

Indian chatbots like Hitee (designed for Indian SMEs) support several Indian regional languages including Hindi, Tamil, Bengali, Telugu, Gujarati, Kannada and Malayalam.

Video conferencing chatbots can be used by private clinics and healthcare practitioners to converse with their patients.

5 Popular AI Healthcare Chatbots

  1. mfine: It is a digital health platform for on-demand healthcare services. It offers online consultation (text, audio, video), medicine delivery, follow-ups, and patient record management services.
  2. Wysa: Developed by Touchkin, Wysa is an AI-powered stress, depression, and anxiety therapy chatbot. It helps people practice CBT (Cognitive Behavioral Therapy) and DBT (Dialectical Behavior Therapy) techniques to build resilience. For additional support, it connects people with real human coaches.
  3. Mediktor: It is an accurate AI-based symptoms checker with great NLP (Natural Language Processing) capabilities.
  4. ZINI: It provides every user with a Unique Global Health ID that can be used for managing one’s healthcare information all over the world. It also creates an emergency medical profile for the patient for urgent medical requirements.
  5. Your.MD: It provides personalized information, guidance, and healthcare support. With an in-built symptom checker, vast medical database, health plans, and journals, it is a certified application for digital healthcare support.

To learn more about how AI is transforming the healthcare industry and ushering in the digital health era, check out our webinar ‘Digital Health Beyond COVID-19: Bringing the Hospital to the Customer’ on our YouTube channel.

We specialize in building custom AI-powered chatbots specific to your business requirements. Feel free to drop us a word at hello@mantralabsglobal.com to know more.


Tabular Data Extraction from Invoice Documents


The task of extracting information from tables is a long-running problem in machine learning and image processing. Although deep learning has seen a lot of recent success, tabular data extraction remains a challenge due to the vast variety of ways in which tables are represented, both visually and structurally. Below are some examples:

Figs. 1-5: examples of the varied ways tables are represented.

Invoice Documents

Many companies process their bills in the form of invoices, which contain tables holding information about items along with their prices and quantities. This information generally needs to be stored in a database as the invoices are processed.

Traditionally, this information is entered into database software by hand. However, this approach has some drawbacks:

1. The whole process is time-consuming.

2. Errors might get introduced during the data entry process.

3. Manual data entry adds extra cost.

An invoice automation system can be deployed to address these shortcomings. The idea is that the user uploads an invoice document, and the system reads it and generates the tabular information in digital format, making the whole process faster and more cost-effective for companies.

Fig. 6

Fig. 6 shows a sample invoice that contains some regular invoice details such as Invoice No, Invoice Date, Company details, and two tables holding transaction information. Now, our goal is to extract the information present in the two tables.

Tabular Information

The problem of extracting tables from invoices can be condensed into two main subtasks:

1. Table Detection

2. Table Structure Extraction

What is Table Detection?

Table Detection is the process of identifying and locating tables present in a document, usually an image. There are multiple ways to detect tables in an image. Some approaches use image processing toolkits like OpenCV, while others apply statistical models to features extracted from the documents, such as text position and text characteristics. Recently, deep learning approaches have been used to detect tables with trained neural networks, similar to those used in object detection.

What is Table Structure Extraction?

Table Structure Extraction is the process of extracting the tabular information once the boundaries of the table have been detected through Table Detection. The information within the rows and columns is then extracted and transferred to the desired format, usually a CSV or Excel file.

Table Detection using Faster RCNN

Faster RCNN is a neural network model from the RCNN family and the successor of Fast RCNN, created by Ross Girshick in 2015. The name signifies an improvement over the previous model in terms of both training speed and detection speed.

To read more about the model framework, one can access the paper Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.

 There are many other object detection model architectures that are available for use today. Each model comes with certain advantages and disadvantages in terms of prediction accuracy, model parameter size, inference speed, etc.

For the task of detecting tables in invoice documents, we select the Faster RCNN model with an FPN (Feature Pyramid Network) as the feature extraction network, built on the ResNet-101 architecture and pre-trained on the ImageNet corpus. The ImageNet corpus is a public dataset that consists of more than 20,000 image categories of everyday objects. We use the PyTorch framework to train and test the model.

The above model gives us a fast inference time and a high mean average precision, making it well suited for cases where quick, real-time detection is desired.

First, the model is to be trained using public datasets for Table Detection such as Marmot and UNLV datasets. Next, we further fine-tune the model with our custom labeled dataset. For the purpose of labeling, we will follow the COCO annotation format.

Once trained, the model displayed an accuracy close to 86% on our custom dataset. There are certain scenarios where the model fails to locate tables, such as cases containing watermarks or overlapping text. Tables without borders are also missed in a few instances. However, the model has shown its ability to learn from examples and detect tables across a variety of invoice documents.

Fig. 7

After running inference on the sample invoice from Fig 6, we can see the two table boundaries detected by the model in Fig 7: the first table is detected with 100% confidence and the second with 99% confidence.

Table Structure Extraction

Once the boundaries of the table are detected by the model, an OCR (Optical Character Recognition) mechanism is used to extract the text within those boundaries. The extracted text is then processed according to the structure of the particular table.
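One common post-OCR step is grouping word boxes into rows and columns by their pixel coordinates. The sketch below shows a simple version of that heuristic: cluster words whose vertical positions are close, then sort each row left-to-right. The word boxes are hypothetical OCR output made up for the example, not real invoice data, and `row_tolerance` is an illustrative parameter.

```python
def words_to_rows(words, row_tolerance=10):
    """Group OCR word boxes into table rows, then order each row by x."""
    rows = []  # list of (reference_y, [word dicts])
    for word in sorted(words, key=lambda w: w["top"]):
        for ref_y, row in rows:
            # Words whose tops are within the tolerance share a row.
            if abs(word["top"] - ref_y) <= row_tolerance:
                row.append(word)
                break
        else:
            rows.append((word["top"], [word]))
    # Sorting by the left edge recovers the column order.
    return [[w["text"] for w in sorted(row, key=lambda w: w["left"])]
            for _, row in rows]

# Hypothetical word-level OCR output for a 2-row, 3-column table.
ocr_words = [
    {"text": "Item", "left": 10, "top": 5},
    {"text": "Qty", "left": 120, "top": 7},
    {"text": "Price", "left": 200, "top": 6},
    {"text": "Pen", "left": 12, "top": 40},
    {"text": "2", "left": 125, "top": 42},
    {"text": "30.00", "left": 198, "top": 41},
]
print(words_to_rows(ocr_words))
# -> [['Item', 'Qty', 'Price'], ['Pen', '2', '30.00']]
```

Real invoices need more care (merged cells, wrapped text), which is where the challenges listed below come in.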

We were able to extract the correct structure of the table, including its headers and line items, using logic derived from the invoice layouts. The difficulty of this process depends on the type of invoice format at hand.

There are multiple challenges that one may encounter while building an algorithm to extract structure. Some of them are:

  1. The spans of some table columns may overlap, making it difficult to determine the boundaries between columns.
  2. The fonts and sizes within tables may vary from one table to another, and the algorithm should be able to accommodate this variation.
  3. A table may be split across two pages, and detecting its continuation can be challenging.

Certain deep learning approaches have also been published recently to determine the structure of a table. However, training them on custom datasets still remains a challenge. 

Fig 8

The final result is then stored in a CSV file and can be edited or stored as convenient, as shown in Fig 8, which displays the information from the first table.
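Writing the recovered rows out is straightforward with Python's standard csv module; the rows below are hypothetical placeholders for the extracted table.

```python
import csv
import io

# Hypothetical extracted rows (header plus one line item).
rows = [["Item", "Qty", "Price"], ["Pen", "2", "30.00"]]

# An in-memory buffer stands in for a file on disk here.
buffer = io.StringIO()
csv.writer(buffer).writerows(rows)
print(buffer.getvalue())
```

In production the buffer would simply be a file opened with `open("output.csv", "w", newline="")`.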

Conclusion

The deep learning approach to extracting information from structured documents is a step in the right direction. With high accuracy and low running time, such systems only learn to perform better with more data. Recent and upcoming advancements in computer vision have made processes such as invoice automation significantly more accessible and robust.

About the author:

Prateek Sethi is a Data Scientist working at Mantra Labs. His work involves leveraging Artificial Intelligence to create data-driven solutions. Apart from his work he takes a keen interest in football and exploring the outdoors.

