


Model validation

Once scoring or rating models have been built, the next step is their calibration and validation, both quantitative and qualitative. Why? Because before you deploy a model in your organization, you need to confirm that it works correctly.

When you have a machine learning model, it is important to check the accuracy of its forecasts and to verify that its results hold up in real applications. This ensures the model is of high quality and protects you from errors.

Why does your company need model validation?

Imagine that you create a product and bring it to market without checking whether anyone wants to use or buy it at all. Assumptions alone give no certainty that your project will succeed. Before launching a product, startups test its usability: they build a minimum viable product (MVP) and verify that customers are interested in it. Only then do they seek financing and expand the product's functionality.
What would happen otherwise? They would spend a great deal of time and money preparing the final version of the product, only to discover that nobody wants to use it. The situation is the same with the validation of scoring or rating models, which is why verifying their usefulness matters.

What do we check during model validation?

When validating a model, we develop a number of tests, including statistical ones. We assess the correctness of the classifier by describing its sensitivity and specificity using the ROC curve. We perform the Kolmogorov–Smirnov test, which compares the score distributions of the two classes. We calculate confusion matrices to assess the quality of the model and check its goodness-of-fit criteria. We estimate predictive properties to see whether the model reflects the underlying problem well. And these are just a few examples of the steps we take to validate your model.
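To make these checks concrete, here is a minimal sketch in Python of the confusion matrix, sensitivity and specificity, AUROC, and the Kolmogorov–Smirnov statistic. The synthetic data, the 0.5 cut-off, and the variable names are purely illustrative assumptions, not our production tooling:

```python
# Minimal sketch: core discriminatory-power checks on synthetic data.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1_000)                       # 0 = good, 1 = bad
y_score = np.clip(0.3 * y_true + rng.normal(0.4, 0.2, 1_000), 0.0, 1.0)

# Confusion matrix at an assumed 0.5 cut-off, plus sensitivity and specificity.
tn, fp, fn, tp = confusion_matrix(y_true, (y_score >= 0.5).astype(int)).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

# Discriminatory power: area under the ROC curve.
auroc = roc_auc_score(y_true, y_score)

# Kolmogorov-Smirnov statistic: distance between the two classes' score distributions.
ks_stat, _ = ks_2samp(y_score[y_true == 1], y_score[y_true == 0])

print(f"sensitivity={sensitivity:.2f}  specificity={specificity:.2f}  "
      f"AUROC={auroc:.2f}  KS={ks_stat:.2f}")
```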
All this adds up to a thorough, professional validation of the model, so you can be sure that it works as it should and does not expose your organization to errors. We will check whether your model works correctly in a business environment.

The scope of our validation includes, among others:

- goodness-of-fit criteria of the model (one common test is sketched after this list)
- capturing poor separation
- methods for assessing discriminatory ability
- confusion matrix, misclassification, sensitivity and specificity
- classification accuracy assessment: cross-validation (leave-one-out, k-fold; see the sketch after this list)
- CAP and AR measures (sketched below)
- ROC and AUROC measures
- divergence (Fisher separation; sketched below)
- Kolmogorov–Smirnov test
- entropy measures: WoE, IV, CIER, MIE (WoE and IV are sketched below)
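For the goodness-of-fit item above, one commonly used criterion for scoring models is the Hosmer–Lemeshow test. Below is a minimal sketch, assuming hypothetical predicted probabilities `p` and observed binary outcomes `y`; it is one possible check, not the only one we apply:

```python
# Hosmer-Lemeshow goodness-of-fit: split observations into score deciles and
# compare observed event counts with the counts the model predicts.
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y, p, groups=10):
    order = np.argsort(p)
    y, p = np.asarray(y)[order], np.asarray(p)[order]
    stat = 0.0
    for idx in np.array_split(np.arange(len(p)), groups):
        observed = y[idx].sum()
        expected = p[idx].sum()                           # model-implied event count
        n = len(idx)
        denom = max(expected * (1 - expected / n), 1e-9)  # guard degenerate bins
        stat += (observed - expected) ** 2 / denom
    return stat, chi2.sf(stat, groups - 2)                # statistic, p-value
```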
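For the cross-validation item, a sketch using scikit-learn; the generated data and logistic regression are stand-ins for your portfolio and model:

```python
# K-fold and leave-one-out estimates of classification accuracy.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, LeaveOneOut, cross_val_score

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1_000)

kfold_acc = cross_val_score(model, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0))
loo_acc = cross_val_score(model, X, y, cv=LeaveOneOut())

print(f"5-fold accuracy: {kfold_acc.mean():.3f} +/- {kfold_acc.std():.3f}")
print(f"leave-one-out accuracy: {loo_acc.mean():.3f}")
```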
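For the CAP and AR item: the CAP curve plots the cumulative share of bads captured against the share of the population, sorted from riskiest to safest, and for a binary target the accuracy ratio equals the Gini coefficient, 2 * AUROC - 1. A sketch with illustrative data:

```python
# CAP curve points and the accuracy ratio (AR).
import numpy as np
from sklearn.metrics import roc_auc_score

def cap_curve(y, score):
    """Fraction of population (riskiest first) vs. fraction of bads captured."""
    order = np.argsort(-score)
    cum_events = np.cumsum(y[order]) / y.sum()
    frac_pop = np.arange(1, len(y) + 1) / len(y)
    return frac_pop, cum_events

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500)                      # illustrative bad flags
score = 0.5 * y + rng.normal(size=500)           # illustrative risk scores

frac_pop, cum_events = cap_curve(y, score)
ar = 2 * roc_auc_score(y, score) - 1             # AR via the Gini identity
print(f"AR = {ar:.3f}")
```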
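For divergence (Fisher separation), a common form in credit scoring scales the squared distance between the classes' mean scores by their average variance; `score_good` and `score_bad` are hypothetical arrays of scores split by class:

```python
# Divergence (Fisher separation): higher values mean the score distributions
# of goods and bads are further apart relative to their spread.
import numpy as np

def divergence(score_good, score_bad):
    mu_g, mu_b = score_good.mean(), score_bad.mean()
    var_g, var_b = score_good.var(ddof=1), score_bad.var(ddof=1)
    return (mu_g - mu_b) ** 2 / (0.5 * (var_g + var_b))
```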
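Finally, for the entropy measures, a sketch of Weight of Evidence and Information Value for one binned characteristic (CIER and MIE are entropy-based as well but are not shown here); `bins` and `y` are hypothetical inputs:

```python
# Weight of Evidence (WoE) per bin and Information Value (IV) of a variable.
import numpy as np
import pandas as pd

def woe_iv(bins, y, eps=0.5):
    df = pd.DataFrame({"bin": bins, "bad": y})
    counts = df.groupby("bin")["bad"].agg(["sum", "count"])
    bad = counts["sum"] + eps                    # eps avoids log(0) in empty bins
    good = counts["count"] - counts["sum"] + eps
    dist_good = good / good.sum()
    dist_bad = bad / bad.sum()
    woe = np.log(dist_good / dist_bad)           # WoE per bin
    iv = ((dist_good - dist_bad) * woe).sum()    # total Information Value
    return woe, iv
```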