@thedatajango
Last active October 8, 2024 01:21
Revisions

  1. thedatajango revised this gist May 16, 2018. 1 changed file with 0 additions and 10 deletions.
    10 changes: 0 additions & 10 deletions Fraud_Detection_Complete.ipynb
    Original file line number Diff line number Diff line change
    @@ -1,15 +1,5 @@
    {
    "cells": [
    {
    "cell_type": "markdown",
    "metadata": {
    "collapsed": true
    },
    "source": [
    "# Credit Card Fraud Detection\n",
    "# Anomaly Detection Using Python"
    ]
    },
    {
    "cell_type": "code",
    "execution_count": 11,
  2. thedatajango revised this gist May 16, 2018. 1 changed file with 134 additions and 56 deletions.
    190 changes: 134 additions & 56 deletions Fraud_Detection_Complete.ipynb
    @@ -7,23 +7,15 @@
    },
    "source": [
    "# Credit Card Fraud Detection\n",
    "# Anomaly Detection Using Python\n",
    "\n",
    "Let us take a credit card fraud dataset from Kaggle (https://www.kaggle.com/mlg-ulb/creditcardfraud/Data). The dataset contains transactions made with credit cards by European cardholders in September 2013. It covers transactions that occurred over two days, with 492 frauds out of 284,807 transactions. The dataset is highly unbalanced: the positive class (frauds) accounts for 0.172% of all transactions.\n",
    "\n",
    "It contains only numerical input variables, which are the result of a PCA transformation. Unfortunately, due to confidentiality issues, the original features and more background information about the data cannot be provided. Features V1, V2, ... V28 are the principal components obtained with PCA; the only features that have not been transformed with PCA are 'Time' and 'Amount'.\n",
    "\n",
    "Feature 'Time' contains the seconds elapsed between each transaction and the first transaction in the dataset. Feature 'Amount' is the transaction amount. Feature 'Class' is the response variable; it takes value 1 in case of fraud and 0 otherwise.\n",
    "\n",
    "Before getting into problem solving, I did some research on how to handle imbalanced datasets. Here are some of the techniques I came across:\n",
    "1) The Anomaly Detection section (week 9) of Andrew Ng's Machine Learning course on Coursera: https://www.coursera.org/learn/machine-learning\n",
    "2) https://www.analyticsvidhya.com/blog/2017/03/imbalanced-classification-problem/"
    "# Anomaly Detection Using Python"
    ]
    },
    {
    "cell_type": "code",
    "execution_count": 11,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": [
    "import pandas as pd\n",
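The extreme class imbalance described above is worth verifying as a first step after loading the CSV. A minimal sketch using a synthetic stand-in for the Kaggle file (the real notebook reads `creditcardfraud` data with pandas; the variable names here mirror its `cc_dataset` but the data is generated, not real):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the Kaggle creditcard.csv: 'Class' is 1 for fraud.
# The real file would be loaded with pd.read_csv("creditcard.csv").
rng = np.random.default_rng(0)
labels = np.zeros(10_000, dtype=int)
labels[rng.choice(10_000, size=17, replace=False)] = 1  # ~0.17% positives
cc_dataset = pd.DataFrame({"Class": labels})

# The fraction of fraud rows confirms how skewed the label distribution is
fraud_ratio = cc_dataset["Class"].mean()
print(f"fraud share: {fraud_ratio:.4%}")
```

With the real dataset this same one-liner gives 492 / 284,807 ≈ 0.172%.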
    @@ -66,7 +58,9 @@
    {
    "cell_type": "code",
    "execution_count": 12,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": [
    "def data_preparation(data):\n",
    @@ -88,7 +82,9 @@
    {
    "cell_type": "code",
    "execution_count": 13,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": [
    "def build_model_train_test(model,x_train,x_test,y_train,y_test):\n",
    @@ -148,7 +144,9 @@
    {
    "cell_type": "code",
    "execution_count": 14,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": [
    "def build_model_train(model,x_train,y_train):\n",
    @@ -179,7 +177,9 @@
    {
    "cell_type": "code",
    "execution_count": 15,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": [
    "def build_model_test(model,x_test,y_test):\n",
    @@ -202,7 +202,9 @@
    {
    "cell_type": "code",
    "execution_count": 16,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": [
    "def SelectThresholdByCV(probs,y):\n",
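`SelectThresholdByCV` scans candidate thresholds over the predicted probabilities on a validation set and keeps the one that maximizes a score. The cell body is not shown in this diff, so the following is only a sketch of that idea; the grid of 101 thresholds and the F1 criterion are assumptions:

```python
import numpy as np
from sklearn.metrics import f1_score

def select_threshold_by_cv(probs, y):
    """Sketch of threshold selection: scan candidate cutoffs over the
    probability range and keep the one with the highest F1 score."""
    best_eps, best_f1 = 0.0, -1.0
    for eps in np.linspace(probs.min(), probs.max(), 101):
        preds = (probs >= eps).astype(int)
        score = f1_score(y, preds, zero_division=0)
        if score > best_f1:
            best_eps, best_f1 = eps, score
    return best_eps, best_f1

# Toy example where the scores cleanly separate the classes
y_cv = np.array([0, 0, 0, 0, 1, 1])
probs_cv = np.array([0.10, 0.20, 0.15, 0.30, 0.90, 0.80])
eps, best_f1 = select_threshold_by_cv(probs_cv, y_cv)
```

Any cutoff strictly above 0.30 and at or below 0.80 classifies this toy set perfectly, so the scan settles on a threshold in that band.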
    @@ -250,7 +252,9 @@
    {
    "cell_type": "code",
    "execution_count": 17,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": [
    "def SelectThresholdByCV_Anomaly(probs,y):\n",
    @@ -300,7 +304,9 @@
    {
    "cell_type": "code",
    "execution_count": 18,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": [
    "def Print_Accuracy_Scores(y,y_pred):\n",
    @@ -312,7 +318,9 @@
    {
    "cell_type": "code",
    "execution_count": 19,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": [
    "#Loading Dataset\n",
    @@ -1003,7 +1011,9 @@
    {
    "cell_type": "code",
    "execution_count": 27,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": [
    "#Train Test split - note that train_test_split does NOT stratify by default; pass stratify=y to preserve the label (y-value) class ratio in both splits.\n",
    @@ -1013,7 +1023,9 @@
    {
    "cell_type": "code",
    "execution_count": 28,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": [
    "#Feature Scaling - Standardizing the scales for all x variables\n",
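The split-then-scale sequence above can be sketched as follows. The data here is synthetic and the variable names mirror the notebook's; two details matter on an imbalanced problem: `stratify=y` must be passed explicitly (it is not the default), and the scaler must be fit on the training split only to avoid leaking test statistics:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 5))
y = (rng.random(1000) < 0.1).astype(int)  # ~10% positive class

# stratify=y keeps the fraud ratio (nearly) identical in train and test
x_train, x_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Fit the scaler on the training data only, then apply to both splits
scaler = StandardScaler().fit(x_train)
x_train_s = scaler.transform(x_train)
x_test_s = scaler.transform(x_test)
```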
    @@ -1026,7 +1038,9 @@
    {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": []
    },
    @@ -1079,7 +1093,9 @@
    {
    "cell_type": "code",
    "execution_count": 30,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": [
    "#Predict target variable (Y-label)\n",
    @@ -1284,7 +1300,9 @@
    {
    "cell_type": "code",
    "execution_count": 35,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": [
    "#Decision scores or Confidence scores - a measure of the distance from each sample to the decision boundary hyperplane\n",
    @@ -1294,7 +1312,9 @@
    {
    "cell_type": "code",
    "execution_count": 36,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": [
    "#Precision recall curve - Computes precision-recall pairs for different probability thresholds\n",
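On a dataset this skewed, the precision-recall curve is more informative than a ROC curve, because precision is sensitive to the flood of true negatives that ROC ignores. A self-contained sketch of the call mentioned in the comment (toy labels and scores, not the notebook's):

```python
import numpy as np
from sklearn.metrics import average_precision_score, precision_recall_curve

# precision_recall_curve evaluates a precision/recall pair at every
# distinct score threshold; average_precision_score summarizes the curve.
y_true = np.array([0, 0, 1, 1, 0, 1])
scores = np.array([0.10, 0.40, 0.35, 0.80, 0.20, 0.70])

precision, recall, thresholds = precision_recall_curve(y_true, scores)
ap = average_precision_score(y_true, scores)
```

Note the returned arrays: `precision` and `recall` have one more entry than `thresholds`, and the curve is anchored at precision 1, recall 0.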
    @@ -1426,7 +1446,9 @@
    {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": []
    },
    @@ -1442,7 +1464,9 @@
    {
    "cell_type": "code",
    "execution_count": 41,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": [
    "# #Use GridSearchCV to find the best parameters for RandomForest algorithm\n",
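The commented-out grid search above is not shown in full, so here is a hedged sketch of how GridSearchCV is typically wired up for a RandomForest on imbalanced data; the parameter grid and the `scoring="f1"` choice are illustrative assumptions, and the data is synthetic:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # learnable toy target

# Small illustrative grid; the notebook's actual grid is not shown here.
param_grid = {"n_estimators": [10, 25], "max_depth": [3, None]}
grid_search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    scoring="f1",  # optimize F1 rather than accuracy on imbalanced labels
    cv=3,
)
grid_search.fit(X, y)
best_model = grid_search.best_estimator_  # what the next cell inspects
```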
    @@ -1460,7 +1484,9 @@
    {
    "cell_type": "code",
    "execution_count": 42,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": [
    "# grid_search.best_estimator_"
    @@ -2461,7 +2487,9 @@
    {
    "cell_type": "code",
    "execution_count": 55,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": [
    "#Probability scores for each record for prediction\n",
    @@ -2537,7 +2565,9 @@
    {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": []
    },
    @@ -2574,7 +2604,9 @@
    {
    "cell_type": "code",
    "execution_count": 58,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": [
    "genuine_data = cc_dataset[cc_dataset['Class']==0]\n",
    @@ -2584,7 +2616,9 @@
    {
    "cell_type": "code",
    "execution_count": 59,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": [
    "genuine_indexes = genuine_data.index\n",
    @@ -2634,7 +2668,9 @@
    {
    "cell_type": "code",
    "execution_count": 62,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": [
    "undersample_data = cc_dataset.iloc[undersample_indexes,:].values"
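The undersampling cells above keep every fraud row and draw an equal number of genuine rows, producing a balanced subset. A runnable sketch of that index-sampling pattern on synthetic data (the notebook's `genuine_indexes`/`undersample_indexes` names are reused, the sampling code itself is assumed since it is not shown):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
cc = pd.DataFrame({
    "V1": rng.normal(size=1000),
    "Class": (rng.random(1000) < 0.05).astype(int),  # ~5% "fraud"
})

fraud_indexes = cc[cc["Class"] == 1].index
genuine_indexes = cc[cc["Class"] == 0].index

# Keep all fraud rows; sample an equal number of genuine rows without replacement
sampled_genuine = rng.choice(genuine_indexes, size=len(fraud_indexes),
                             replace=False)
undersample_indexes = np.concatenate([fraud_indexes, sampled_genuine])
undersample_data = cc.loc[undersample_indexes]
```

The trade-off: the classifier now sees a 50:50 class mix, but most genuine transactions are discarded.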
    @@ -2643,7 +2679,9 @@
    {
    "cell_type": "code",
    "execution_count": 63,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": [
    "#Split X & Y\n",
    @@ -2706,7 +2744,9 @@
    {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": []
    },
    @@ -2779,7 +2819,9 @@
    {
    "cell_type": "code",
    "execution_count": 67,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": [
    "#Split X & Y\n",
    @@ -2834,7 +2876,9 @@
    {
    "cell_type": "code",
    "execution_count": 69,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": [
    "#Probability score for each record for prediction\n",
    @@ -3016,7 +3060,9 @@
    {
    "cell_type": "code",
    "execution_count": 75,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": [
    "#Probability scores for each record for prediction\n",
    @@ -3139,7 +3185,9 @@
    {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": []
    },
    @@ -3164,7 +3212,9 @@
    {
    "cell_type": "code",
    "execution_count": 80,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": [
    "os = SMOTE(random_state=0)"
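The notebook uses imbalanced-learn's `SMOTE` here. To make the mechanism concrete without depending on that library, the core idea can be sketched in plain NumPy: each synthetic minority sample is an interpolation between a real minority sample and one of its k nearest minority neighbours. This is a simplified illustration, not imblearn's implementation:

```python
import numpy as np

def smote_like_oversample(X_min, n_new, k=5, rng=None):
    """Sketch of SMOTE's core idea: synthesize minority samples by
    interpolating between a sample and one of its k nearest neighbours."""
    if rng is None:
        rng = np.random.default_rng(0)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        dists = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(dists)[1:k + 1]  # skip the point itself
        j = rng.choice(neighbours)
        gap = rng.random()                       # position along the segment
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(synthetic)

rng = np.random.default_rng(0)
X_minority = rng.normal(size=(20, 3))            # toy minority class
synthetic = smote_like_oversample(X_minority, n_new=30, rng=rng)
```

Because every synthetic point lies on a segment between two real minority points, SMOTE enriches the minority region instead of duplicating rows the way naive oversampling does.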
    @@ -3245,7 +3295,9 @@
    {
    "cell_type": "code",
    "execution_count": 83,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": [
    "#Probability scores for each record for prediction\n",
    @@ -3429,7 +3481,9 @@
    {
    "cell_type": "code",
    "execution_count": 89,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": [
    "#Probability scores for each record for prediction\n",
    @@ -3538,7 +3592,9 @@
    {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": []
    },
    @@ -3587,7 +3643,9 @@
    {
    "cell_type": "code",
    "execution_count": 94,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": [
    "#Loading Dataset\n",
    @@ -3604,7 +3662,9 @@
    {
    "cell_type": "code",
    "execution_count": 95,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": [
    "cc_dataset.drop(labels = ['V28','V27','V26','V25','V24','V23','V22','V20','V15','V13','V8','Time'], axis = 1, inplace=True)"
    @@ -3709,7 +3769,9 @@
    {
    "cell_type": "code",
    "execution_count": 99,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": [
    "#Example for applying log transformation\n",
    @@ -3720,7 +3782,9 @@
    {
    "cell_type": "code",
    "execution_count": 100,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": [
    "#Example for applying log transformation\n",
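Since the Gaussian anomaly-detection approach below assumes roughly normal features, skewed columns are log-transformed first. The cell bodies are not shown, so this is a sketch; `np.log1p` is used here as one reasonable choice because it handles the zero-amount transactions that plain `np.log` cannot:

```python
import numpy as np
import pandas as pd

# 'Amount' is heavily right-skewed; log1p (log(1 + x)) compresses the long
# tail while mapping zero-amount transactions to exactly 0.
amounts = pd.Series([0.0, 1.0, 10.0, 100.0, 25000.0])
log_amounts = np.log1p(amounts)
```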
    @@ -3731,7 +3795,9 @@
    {
    "cell_type": "code",
    "execution_count": 101,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": [
    "def estimateGaussian(data):\n",
    @@ -3743,7 +3809,9 @@
    {
    "cell_type": "code",
    "execution_count": 102,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": [
    "def MultivariateGaussianDistribution(data,mu,sigma):\n",
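The two helpers above fit a Gaussian to the genuine transactions and score each record by its density; low density flags an anomaly. Their bodies are not shown in this diff, so the following is a sketch of the standard formulation on synthetic data (the snake_case names are this sketch's, not the notebook's):

```python
import numpy as np

def estimate_gaussian(data):
    # Mean vector and full covariance matrix of the training data
    mu = data.mean(axis=0)
    sigma = np.cov(data, rowvar=False)
    return mu, sigma

def multivariate_gaussian_pdf(x, mu, sigma):
    # Density of N(mu, sigma) at a single point x
    k = mu.shape[0]
    diff = x - mu
    norm_const = 1.0 / np.sqrt((2 * np.pi) ** k * np.linalg.det(sigma))
    return norm_const * np.exp(-0.5 * diff @ np.linalg.inv(sigma) @ diff)

rng = np.random.default_rng(0)
train = rng.normal(size=(500, 3))                 # stand-in for genuine rows
mu, sigma = estimate_gaussian(train)

p_typical = multivariate_gaussian_pdf(mu, mu, sigma)        # densest point
p_outlier = multivariate_gaussian_pdf(mu + 10.0, mu, sigma) # far from the data
```

A record whose density falls below the epsilon chosen by the cross-validation threshold search is flagged as fraud.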
    @@ -3755,7 +3823,9 @@
    {
    "cell_type": "code",
    "execution_count": 103,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": [
    "genuine_data = cc_dataset[cc_dataset['Class']==0]\n",
    @@ -3807,7 +3877,9 @@
    {
    "cell_type": "code",
    "execution_count": 106,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": [
    "#Split Fraud records into Cross Validation & Test (50:50 ratio)\n",
    @@ -3878,7 +3950,9 @@
    {
    "cell_type": "code",
    "execution_count": 110,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": [
    "#StandardScaler – Feature scaling is not required since all the features are already standardized via PCA\n",
    @@ -3891,7 +3965,9 @@
    {
    "cell_type": "code",
    "execution_count": 111,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": [
    "#Find out the parameters Mu and Covariance for passing to the probability density function\n",
    @@ -3926,7 +4002,9 @@
    {
    "cell_type": "code",
    "execution_count": 113,
    "metadata": {},
    "metadata": {
    "collapsed": true
    },
    "outputs": [],
    "source": [
    "#Calculate the probabilities for cross-validation and test records by passing the mean and covariance matrix derived from the training data\n",
    @@ -4124,7 +4202,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
    "version": "3.6.5"
    "version": "3.6.3"
    }
    },
    "nbformat": 4,
  3. thedatajango created this gist May 15, 2018.
    4,132 changes: 4,132 additions & 0 deletions Fraud_Detection_Complete.ipynb
    4,132 additions, 0 deletions not shown because the diff is too large. Please use a local Git client to view these changes.