IntelliDecision.ai User Manual

Product Document (Marketing & Internal Enablement)

1. Product Overview

IntelliDecision.ai is a no-code, enterprise-grade decision intelligence platform designed to help organisations build, evaluate, deploy, and govern predictive models and decision strategies at scale.

Built by Corestrat, IntelliDecision.ai eliminates the traditional complexity of machine learning and statistical modelling by embedding advanced AI, automated feature engineering, model optimisation, and decision logic into a guided, intuitive workflow. The platform enables both data scientists and business users to collaborate on building explainable, auditable, and production-ready decision systems, without writing a single line of code.

At its core, IntelliDecision.ai bridges the gap between analytics and action, transforming raw data into deployable business decisions.


2. Who IntelliDecision.ai Is For

IntelliDecision.ai is purpose-built for:

  • Financial Services & Fintech (credit risk, underwriting, fraud, collections)
  • Insurance (risk selection, pricing, claims triage)
  • Retail & E‑commerce (customer scoring, churn, offer optimisation)
  • Logistics & Supply Chain (delay risk, vendor risk, demand forecasting)
  • Enterprises adopting AI-driven decision automation

Key user personas include:

  • Risk & Analytics Teams
  • Business Analysts
  • Data Scientists
  • Product Managers
  • Compliance & Model Governance Teams
  • Technology & Platform Teams


3. Key Capabilities

  • No-code / low-code model development
  • End-to-end ML lifecycle management
  • Advanced data preprocessing & feature engineering
  • Automated and manual model building
  • Explainability via IV, WoE, SHAP, KS, Gini
  • API-based deployment
  • Auto documentation & audit readiness

4. Platform Architecture & Workflow

IntelliDecision.ai follows a structured six-stage decision intelligence pipeline:

  1. Data Ingestion & Project Setup
  2. Data Preparation & Feature Engineering
  3. Sampling & Target Definition
  4. Model Development & Comparison
  5. Model Evaluation & Explainability
  6. Decision Design, Simulation & Deployment

Each stage is fully integrated, traceable, and configurable.


5. Data Ingestion & Project Management

Project Creation

  • Create new projects or refine existing ones
  • Maintain multiple versions and assumptions
  • Import projects from different environments

Data Upload Options

  • File-based upload: CSV, Excel, Feather, Parquet, delimited text
  • Database ingestion: Databricks, MySQL, Snowflake
  • SQL-based data extraction

Advanced Dataflow Builder (ADB)

The Advanced Dataflow Builder enables complex data preparation using a visual, drag-and-drop canvas:

  • Multiple dataset ingestion
  • Horizontal joins (Inner, Left, Right, Outer)
  • Vertical joins (stacking datasets)
  • Aggregations (numerical & categorical)
  • Custom WHERE conditions with rule grouping
  • Python code execution via reusable functions
  • Training and validation dataset creation
  • Memory management at node level

This enables enterprise-grade data engineering without external ETL tools.
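To illustrate what an ADB flow expresses, the pandas sketch below mirrors three typical nodes: a horizontal left join, an aggregation, and a custom WHERE rule. The dataset and column names are hypothetical, not part of the platform.

```python
import pandas as pd

# Hypothetical datasets standing in for two ADB input nodes
loans = pd.DataFrame({"cust_id": [1, 2, 3], "amount": [500, 750, 300]})
custs = pd.DataFrame({"cust_id": [1, 2, 4], "region": ["N", "S", "E"]})

# Horizontal join (Left): keep every loan, attach customer attributes
joined = loans.merge(custs, on="cust_id", how="left")

# Aggregation node: total amount per region (keep unmatched rows too)
agg = joined.groupby("region", dropna=False)["amount"].sum().reset_index()

# Custom WHERE condition: keep only rows satisfying a rule
filtered = joined[joined["amount"] > 400]
```

The same steps would be configured visually on the ADB canvas rather than coded.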


6. Data Preprocessing & Management

Users can choose between two preprocessing modes:

  • Let AI Do It (fully automated preprocessing)
  • Do It Yourself (manual control)

Core Preprocessing Features

Row & Column Management

  • Remove duplicates and empty rows/columns
  • Identify uni-valued and all-distinct columns
  • Detect and handle duplicate columns

Column Data Type Management

  • Convert numerical ↔ categorical variables
  • Identify likely categorical or numerical candidates

Variable Treatment & Governance

  • Identify sensitive variables (e.g. age, gender)
  • Flag high-missing or low-information variables
  • Support fairness-aware modelling


7. Feature Engineering & Transformation

Feature Engineering Options

  • Code‑It‑Yourself Python feature creation
  • Variable picker for rapid coding
  • Two-way interaction creation

Variable Transformation

  • Automated (AI-selected best transformations)
  • Manual (individual or batch)
  • Retain original and/or transformed variables

Category Encoding

  • One-Hot Encoding
  • Frequency Encoding
  • Batch or variable-level control
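The two encoding schemes can be illustrated in a few lines of pandas; the `colour` column is a made-up example, not a platform field.

```python
import pandas as pd

df = pd.DataFrame({"colour": ["red", "blue", "red", "green"]})

# One-Hot Encoding: one indicator column per category
one_hot = pd.get_dummies(df["colour"], prefix="colour")

# Frequency Encoding: replace each category with its relative frequency
freq = df["colour"].map(df["colour"].value_counts(normalize=True))
```

Frequency encoding keeps a single column, which is often preferable for high-cardinality variables.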

Distribution Analysis

  • Interactive histograms for numerical variables
  • Adjustable binning
  • Categorical frequency visualisations
  • Outlier and percentile insights

8. Sampling & Target Definition

Target Variable Selection

  • Auto-identification of candidate target variables
  • Define positive vs negative outcome categories

Stratified Sampling

  • Default 70/30 train-test split
  • Customisable ratios
  • Ensures target distribution stability
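A stratified 70/30 split of the kind described above can be sketched with scikit-learn; the dataset here is synthetic, with a 20% positive rate that stratification preserves in both partitions.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Synthetic dataset with a binary target (20% positives)
df = pd.DataFrame({"x": range(100), "target": [0] * 80 + [1] * 20})

# Default 70/30 split, stratified on the target so the
# positive rate is stable across train and test
train, test = train_test_split(
    df, test_size=0.30, stratify=df["target"], random_state=42
)
```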

9. Variable Insights, Binning & Selection

Information Value (IV) Analysis

  • Fine & final classing
  • Correlation assessment
  • Inferred relationship detection
  • Manual & IV-optimal binning (numeric & categorical)
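The IV statistic behind this analysis follows the standard Weight of Evidence formulation: per bin, WoE = ln(share of goods / share of bads), and IV sums (share of goods − share of bads) × WoE across bins. A minimal sketch using made-up bin counts:

```python
import numpy as np
import pandas as pd

# Hypothetical final classing of one variable: good/bad counts per bin
bins = pd.DataFrame({
    "bin": ["low", "mid", "high"],
    "good": [100, 300, 100],
    "bad": [50, 30, 20],
})

# Weight of Evidence: ln(distribution of goods / distribution of bads)
bins["pct_good"] = bins["good"] / bins["good"].sum()
bins["pct_bad"] = bins["bad"] / bins["bad"].sum()
bins["woe"] = np.log(bins["pct_good"] / bins["pct_bad"])

# Information Value: sum of (pct_good - pct_bad) * WoE over all bins
iv = ((bins["pct_good"] - bins["pct_bad"]) * bins["woe"]).sum()
```

By common rule of thumb, IV above roughly 0.3 indicates a strong predictor, while values below about 0.02 indicate little predictive power.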

Clustering (VarClus)

  • Cluster variables using WoE or original values
  • Variance retention or cluster count control
  • Automatic representative variable selection

Multicollinearity Management

  • Correlation matrix
  • VIF-based variable classification
  • Automated correlated variable pruning
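VIF-based classification rests on a standard identity: each variable's variance inflation factor equals the corresponding diagonal element of the inverse correlation matrix. A small NumPy sketch with synthetic data, where one variable is nearly a copy of another:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic design matrix: x2 is almost a copy of x1, x3 is independent
x1 = rng.normal(size=500)
x2 = x1 + rng.normal(scale=0.1, size=500)  # highly collinear with x1
x3 = rng.normal(size=500)                  # uncorrelated
X = np.column_stack([x1, x2, x3])

# VIF_j is the j-th diagonal element of the inverse correlation matrix
corr = np.corrcoef(X, rowvar=False)
vif = np.diag(np.linalg.inv(corr))
```

Variables with VIF above a chosen threshold (commonly 5 or 10) are candidates for pruning.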

Variable Lineage View

  • Full trace of variables added, removed, transformed
  • End-to-end transparency

10. Model Development & AutoML

Supported Model Types

  • Decision Trees
  • Logistic Regression
  • Random Forest
  • XGBoost
  • Neural Networks

Model Settings

  • Global parameters (node size, depth, IV, VIF)
  • Score scaling (Base Score, PDO, Odds)
  • Algorithm-specific hyperparameters
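The Base Score / PDO / Odds parameters correspond to the standard scorecard scaling formula, score = offset + factor × ln(odds), where factor = PDO / ln(2). The parameter values below are illustrative, not platform defaults:

```python
import math

# Illustrative scaling parameters (assumed, not platform defaults)
base_score = 600   # score assigned at the base odds
base_odds = 50     # good:bad odds at the base score
pdo = 20           # points to double the odds

# Standard scorecard scaling
factor = pdo / math.log(2)
offset = base_score - factor * math.log(base_odds)

def scale(odds):
    """Map good:bad odds to a score on the configured scale."""
    return offset + factor * math.log(odds)
```

With these parameters, odds of 50:1 map to 600 and every doubling of the odds adds 20 points.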

Auto Grow Trees

  • Automated tree growth
  • Manual split insertion
  • Node collapse & override
  • IV-guided split recommendations

AutoML (Model.ai)

  • One-click model training
  • Bayesian hyperparameter optimisation
  • Model-specific explainability

11. Model Comparison & Ensembling

  • Build up to 3 models
  • Compare using KS, Gini, AUC, F1
  • Traffic-light performance indicators
  • Model ensembling (averaging / weighted)
  • Final model selection
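Averaging and weighted ensembling of predicted probabilities can be sketched as follows; the probabilities and weights are made up, and in practice the weights might be set in proportion to each model's validation performance:

```python
import numpy as np

# Hypothetical predicted probabilities from three candidate models
p1 = np.array([0.20, 0.70, 0.55])
p2 = np.array([0.30, 0.60, 0.50])
p3 = np.array([0.25, 0.80, 0.40])

# Simple averaging ensemble
p_avg = (p1 + p2 + p3) / 3

# Weighted ensemble (weights must sum to 1)
weights = np.array([0.5, 0.3, 0.2])
p_weighted = weights[0] * p1 + weights[1] * p2 + weights[2] * p3
```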

12. Model Evaluation & Explainability

Performance Metrics

  • KS & Gini (Train vs Test)
  • ROC & AUC
  • Sensitivity vs Specificity curves
  • Score distribution & bad rates
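KS and Gini are both derived from the ROC curve: KS is the maximum vertical separation between the cumulative bad and good distributions, and Gini rescales AUC to 2 × AUC − 1. A scikit-learn sketch on a toy sample:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Toy labels (1 = bad) and model scores
y = np.array([0, 0, 0, 0, 1, 0, 1, 1, 0, 1])
score = np.array([0.1, 0.2, 0.25, 0.3, 0.45, 0.5, 0.6, 0.7, 0.35, 0.9])

# KS: maximum separation between cumulative distributions of
# bads (true positive rate) and goods (false positive rate)
fpr, tpr, _ = roc_curve(y, score)
ks = np.max(tpr - fpr)

# Gini: rescaled AUC
auc = roc_auc_score(y, score)
gini = 2 * auc - 1
```

Computing both on train and test, as the platform does, highlights overfitting when the two diverge.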

Explainability

  • Variable importance
  • Scorecards
  • SHAP values (tree & AI models)
  • Node-level transparency

13. Decision Design, Simulation & Deployment

Decision Simulation

  • Out-of-time (OOT) dataset testing
  • Cut-off simulations
  • Segment-level decisions
  • Reject inference


Deployment

  • Auto-generated REST APIs
  • Sample payload & responses
  • JSON-based scoring integration
  • Production-ready endpoints
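A minimal sketch of the JSON contract such a scoring endpoint implies; all field names here are assumptions for illustration, not the platform's actual schema, and the response shown is a made-up example of the kind an endpoint might return:

```python
import json

# Hypothetical scoring request payload (field names are illustrative)
payload = json.dumps({
    "applicant_id": "A-1001",
    "features": {"age": 34, "income": 52000, "tenure_months": 18},
})

# Example response body a scoring endpoint of this kind might return
response_body = '{"applicant_id": "A-1001", "score": 642, "decision": "approve"}'
response = json.loads(response_body)
```

In production, a client would POST the payload to the generated REST endpoint and parse the response as above; the platform's sample payloads and responses document the exact schema.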

14. Auto Documentation & Governance

  • Auto-generated model documentation
  • Variable definitions & transformations
  • Model assumptions & metrics
  • Audit-ready artefacts

This significantly reduces regulatory, compliance, and internal review effort.