
Insurtechs are Thriving with Machine Learning. Here’s how.

Modern insurance is only around 250 years old, dating to roughly when the statistical and mathematical tools needed to underwrite a business venture came to be. But statistical models, even the most advanced ones, need a very specific, enriched data-diet to work optimally, and the industry has always relied on data to ensure its long-term financial health. For an insurer to take on considerable risk, regardless of size, it draws on the reassurance of statistically sound data that distills the coverage needed (for issuance) down to a fixed number. This ‘number’ influences the amount of coverage (or claim) provided to the insured and, consequently, the premium to be collected.

Such is the reliance on data that even a slight error in the underwriter’s predictions can bankrupt a firm and, at times, even an economy. We’ve seen it before: when banks took on unqualified risks and approved subprime mortgage loans to borrowers with poor credit, they inflated the housing bubble that imploded in ’08.

The nature of risk simply evolves and devolves, and insurers learn progressively with each individual case, absorbing enormous amounts of data into their carefully crafted risk models. These models aid the manual effort of the several hundred data scientists (in the case of large insurers) poring over immense amounts of psychographic, behavioral, and environmental attributes to evaluate an entity’s risk profile. Yet, even with these measures, the risk is unquantifiable if the data scientist doesn’t have a large or clear enough picture to make sense of all the inbound information.

In the age of machine intelligence, data is prime fodder for advanced algorithms. They are designed to thrive on large datasets; in fact, the larger the dataset, the better the system learns. How could it not? An AI system can be a thousand times faster than human computation, raising accuracy levels to near perfection and pushing straight-through processing to the point where, today, nearly one in every two decisions is made without human intervention.


Source: Accenture Report — Machine Learning in Insurance

An estimated 20.4 billion things will be connected by 2020, creating an unprecedented capacity for data handling and insight derivation, and BFSI companies alone will spend US$25 billion on AI in 2020 (as reported by IDC research). Since 2012, more than $10 billion has been invested in insurtechs.

In 2020 and beyond, customers, especially millennials and younger, will come to expect better personalization from their insurance policies. While it should surprise no one that the incumbent, slow-moving giants of traditional insurance will be the last to innovate, new insurtechs like Flyreel are changing the paradigm by piloting Machine Learning projects that translate directly into critical business goals.

According to McKinsey, digital insurers are already achieving better financial results and more efficient go-to-market execution than traditional players.

Here are three ways insurtechs are gaining ground with Machine Learning (specifically, where learning from data is involved):

  1. Risk Prediction
    Predicting and evaluating risk is insurance’s oldest use case, and research suggests it will continue to be so. With ML and advanced algorithms, insurers can process big data from multiple data points such as policy contracts, claims data, weather parameters, crime data, and IoT and sensor data.
    By analysing existing data, identifying anomalies, tracking recurring usage patterns, and then delivering accurate predictions and diagnoses through vertically-tuned algorithms, ML-based platforms can identify risk ratios and risk profiles that enable insurers to customize policies for individual customers in real time. This differs from ‘off-the-shelf’ platforms, which can only be utilized to solve a narrow set of problems.

  2. Customer Lifetime Value (CLV) Prediction
    CLV is a complex metric that represents the value of a customer to an organization as the difference between the revenue gained and the expenses incurred, projected over the entire relationship with the customer, including the future (a toy calculation follows this list).
    Insurers can now predict CLV using customer behavior data, which lets them assess a customer’s potential profitability. Behavior-based learning models can be applied to forecast retention or cross-buying, both critical factors in the company’s future income. ML tools also help insurers predict the likelihood of particular customer behaviors, for example, whether a customer will maintain their policies or surrender them.

  3. Personalization Insights Engine
    User data, combined with AI, machine learning, and the behavioral and social sciences, can provide actionable insights in real time. For example, simulation and learning capabilities allow companies to discover new customer groups and to personalize customer engagement, risk assessment, and forecasting by combining data from multiple sources.
    A common challenge is capturing data from multiple sources and turning it into insights that can inform business decisions across many functions. With machine learning, insurers will be able to underwrite, adjust customer journeys, resolve claims, and adapt offerings accordingly.
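
As a back-of-the-envelope illustration of the CLV metric itself (a toy model, not any insurer’s production approach), CLV can be sketched as the discounted yearly margin weighted by the probability the customer is still retained; every figure and parameter name below is made up:

```typescript
// A toy CLV estimate: yearly margin, weighted by the chance the customer is
// still retained, discounted back to today. Every input here is illustrative.
function estimateClv(
  annualRevenue: number,   // e.g. premiums collected per year
  annualCost: number,      // e.g. expected claims + servicing costs per year
  annualRetention: number, // probability the customer renews each year
  discountRate: number,    // time value of money
  horizonYears: number,
): number {
  const margin = annualRevenue - annualCost;
  let clv = 0;
  for (let year = 1; year <= horizonYears; year++) {
    clv += (margin * annualRetention ** year) / (1 + discountRate) ** year;
  }
  return clv;
}

// $1,200 premium, $800 expected outgo, 85% retention, 8% discount, 10 years
console.log(estimateClv(1200, 800, 0.85, 0.08, 10)); // ≈ 1344
```

In practice, an ML model’s contribution is to predict inputs like the retention probability per customer from behavioral data, rather than assuming a flat rate as this toy does.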

ML-based solutions bring real value to insurers, whether delivered as a standalone product or as part of an embedded process or service. The key for insurers is to pilot smaller-scale ML projects that can bring about cost and time savings across the organization almost immediately, and then improve them through easy, iterative sprints toward a more future-ready state, rather than taking on a complete enterprise makeover from day one!

For more information about how we can help enterprises begin their ML transformation, reach us at hello@mantralabsglobal.com.


Implementing a Clean Architecture with Nest.JS

4-minute read

This article is for enthusiasts who strive to write clean, scalable, and, more importantly, refactorable code. It will give you an idea of how Nest.JS can help us write clean code and of the underlying architecture it uses.

Implementing a clean architecture with Nest.JS requires us to first understand what this framework is and how it works.

What is Nest.JS?

Nest (Nest.JS) is a framework for building efficient, scalable, server-side Node.js applications with TypeScript. It uses Express or Fastify under the hood and adds a level of abstraction that lets developers pull an ample number of third-party modules into their code.
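
To make that concrete, here is a minimal “hello world” sketch of a Nest application; the file name, class names, and route are illustrative, not from the article:

```typescript
// main.ts — a minimal Nest application (illustrative names and route)
import { NestFactory } from '@nestjs/core';
import { Controller, Get, Module } from '@nestjs/common';

@Controller('hello')
class HelloController {
  @Get()
  getHello(): string {
    return 'Hello from Nest!';
  }
}

@Module({ controllers: [HelloController] })
class AppModule {}

async function bootstrap() {
  // Uses the Express adapter by default; Fastify is an opt-in alternative
  const app = await NestFactory.create(AppModule);
  await app.listen(3000);
}
bootstrap();
```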

Let’s dig deeper into what this clean architecture is all about.

Well, you might have used, or at least heard of, MVC architecture. MVC stands for Model, View, Controller. The idea behind it is to separate the project structure into three different sections.

1. Model: It contains the object files that map to the relations/documents in the DB.

2. Controller: It is the request handler, responsible for implementing the business logic and all the data manipulation.

3. View: It contains the files concerned with displaying the data, either HTML files or files for some templating engine.

To create a model, we need some kind of ORM/ODM tool/module/library to build it with. For instance, suppose you use a module, say ‘sequelize’, directly, then use it to implement login in your controller, making your core business logic dependent upon sequelize. Now, down the line, say after 10 years, a better tool arrives on the market and you want to switch to it. But as soon as you replace sequelize, you have to change a lot of code to keep things from breaking, and then re-test every feature to confirm nothing has regressed, which can waste valuable time and resources. To overcome this challenge, we can use the last of the SOLID principles, the Dependency Inversion Principle, along with a technique called dependency injection, and avoid such a mess.

Still confused? Let me explain in detail.

So, in simple words, the Dependency Inversion Principle says: build your core business logic first, then build the dependencies around it. In other words, free your core logic and business rules from any kind of dependency, and shape the outer layers so that they depend on your core logic rather than your logic depending on them. That is exactly what clean architecture is: it pulls the dependencies out of your core business logic and arranges the system so that everything around the core depends on it, not the other way around.
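
Before the diagram, a minimal sketch of this inversion in plain TypeScript may help; the names (UserRepository, LoginUseCase, SequelizeUserRepository) are hypothetical, chosen only for illustration:

```typescript
// Inner layer: the use case owns the abstraction it needs (illustrative names)
interface UserRepository {
  findByEmail(email: string): Promise<{ id: string; email: string } | null>;
}

class LoginUseCase {
  constructor(private readonly users: UserRepository) {}

  async execute(email: string): Promise<boolean> {
    const user = await this.users.findByEmail(email);
    return user !== null; // core rule stays free of any ORM detail
  }
}

// Outer layer: one adapter per tool. Replacing sequelize later means writing
// a new adapter; LoginUseCase and its tests never change.
class SequelizeUserRepository implements UserRepository {
  async findByEmail(email: string) {
    // ...a sequelize-specific query would go here (omitted in this sketch)
    return null;
  }
}

const login = new LoginUseCase(new SequelizeUserRepository());
login.execute('user@example.com').then(console.log);
```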

Let’s try to understand this with the below diagram.

Source: Clean Architecture Cone 

You can see that we have divided our architecture into 4 layers:

1. Entities: At the core, entities are the models (enterprise rules) that define what the application is about. This layer will hardly change over time and is usually abstract, not accessed directly. For example, every application has a ‘user’. The fields the user stores, their types, and their relations with other entities comprise an entity.

2. Use cases: This layer tells us how we can implement the enterprise rules. Let’s take the example of the user again: now that we know what data to operate upon, the use case tells us how to operate upon it; the user will have a password that needs to be encrypted, the user needs to be created, the password can be changed at any given point in time, and so on.

3. Controllers/Gateways: These are the channels that help us implement the use cases using external tools and libraries, by means of dependency injection.

4. External tools: All the tools and libraries we use to build our logic fall under this layer, e.g., the ORM, the emailer, encryption, etc.

The tools we use depend upon how we channel them to the use cases, and the use cases in turn depend upon the entities, which are the core of our business. In this way we have inverted the dependencies from pointing outward to pointing inward. That is what the Dependency Inversion Principle of SOLID implies.
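
To make the inner two layers concrete, here is a minimal, hypothetical sketch; the User entity, the CreateUser use case, and the ports they rely on are all illustrative names:

```typescript
// Layer 1 — Entity: a pure enterprise model with no framework imports
// (all names here are illustrative)
class User {
  constructor(
    public readonly id: string,
    public readonly email: string,
    private passwordHash: string,
  ) {}

  changePassword(newHash: string): void {
    this.passwordHash = newHash;
  }
}

// Ports the use case defines; layers 3–4 will implement them
interface PasswordHasher { hash(plain: string): Promise<string>; }
interface UserStore { save(user: User): Promise<void>; }

// Layer 2 — Use case: how the enterprise rules are operated upon
class CreateUser {
  constructor(
    private readonly hasher: PasswordHasher,
    private readonly store: UserStore,
  ) {}

  async execute(id: string, email: string, password: string): Promise<User> {
    const user = new User(id, email, await this.hasher.hash(password));
    await this.store.save(user);
    return user;
  }
}
```

Note that neither class imports a framework or an ORM; the hashing and storage tools plug in from the outer layers.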

Okay, by now you’ve got the gist of Nest.JS and understood how clean architecture works. Now the question arises: how are these two related?

Let’s look at the three building blocks of Nest.JS and what each of them does.

  1. Modules: Nest.JS is structured in such a way that we can treat each feature as a module. For example, anything linked with the user, such as models, controllers, DTOs, interfaces, etc., can be separated out as a module. A module has a controller and a bunch of providers, which are injectable functionalities like services, the ORM, an emailer, and so on.
  2. Controllers: Controllers in Nest.JS are the interfaces between the network and your logic. They handle requests and return responses to the client side of the application (for example, a call to the API).
  3. Providers (services): Providers are injectable services/functionalities which we can inject into controllers and other providers for flexibility and extra functionality. They abstract away any form of complexity and logic (a wiring sketch follows this list).
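
Tying the three blocks to the clean-architecture idea, here is a hypothetical wiring sketch: the module binds an abstraction token to a concrete provider, so the controller never names the tool it uses. All the names (Mailer, ConsoleMailer, UserController) are illustrative:

```typescript
// user.module.ts — wiring module, controller, and provider (illustrative names)
import { Body, Controller, Inject, Injectable, Module, Post } from '@nestjs/common';

// An abstraction the controller depends on, plus one concrete provider
interface Mailer {
  send(to: string, body: string): Promise<void>;
}

@Injectable()
class ConsoleMailer implements Mailer {
  async send(to: string, body: string): Promise<void> {
    console.log(`mail to ${to}: ${body}`);
  }
}

@Controller('users')
class UserController {
  constructor(@Inject('Mailer') private readonly mailer: Mailer) {}

  @Post()
  async create(@Body('email') email: string) {
    await this.mailer.send(email, 'Welcome!');
    return { ok: true };
  }
}

@Module({
  controllers: [UserController],
  // Swap ConsoleMailer for a real provider here; the controller is untouched
  providers: [{ provide: 'Mailer', useClass: ConsoleMailer }],
})
export class UserModule {}
```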

To summarize,

  • We have controllers that act as interfaces (3rd layer of clean architecture)
  • We have providers which can be injected to provide functionality (4th layer of clean architecture: DB, Devices, etc.)
  • We can also create services and repositories to define our use case (2nd Layer)
  • We can define our entities using DB providers (1st Layer)

Conclusion:

Nest.JS is a powerful Node.JS framework and the best-known TypeScript framework available today. Now that you’ve got the lowdown on it, you must be wondering whether we can use it to build a project structure with a clean architecture. Well, the answer is yes! Absolutely. How? I’ll explain in the next part of this series.

Till then, stay tuned!

About the Author:

Junaid Bhat is currently working as a Tech Lead at Mantra Labs. He is a tech enthusiast striving to become a better engineer every day by following industry standards, with a leaning towards a more structured approach to problem-solving.


Read our latest blog: Golang-Beego Framework and its Applications
