{ "cells": [ { "cell_type": "markdown", "id": "9a321869", "metadata": {}, "source": [ "# Multi-Modal Lie Detection with GSPO-enhanced ReAct Reasoning\n", "\n", "This notebook demonstrates a multi-modal deception detection system that integrates multiple data sources (video, audio, text, and more) with an advanced reasoning framework. The system uses **GSPO-enhanced ReAct** reasoning, combining self-play reinforcement learning and a reasoning-action loop for improved decision-making. It emphasizes transparency, explainability, and ethical considerations in AI-driven lie detection." ] }, { "cell_type": "markdown", "id": "7c1c0192", "metadata": {}, "source": [ "## 1. Installation & Setup\n", "In this section, we install all required libraries and set up the environment.\n", "We'll use `pip` to install necessary packages and mount Google Drive to access datasets like the **Strawberry-Phi** deception dataset.\n", "\n", "#### Dependencies:\n", "- `torch` for deep learning model implementation (CNNs, LSTMs, transformers).\n", "- `transformers` for the text model and NLP tasks.\n", "- `opencv-python` for video processing (facial cues from images).\n", "- `librosa` for audio signal processing (extracting voice features).\n", "- `shap` and `lime` for explainable AI (interpret model decisions).\n", "- `scikit-learn` for evaluation metrics and possibly simple model components.\n", "- `ipywidgets` for interactive UI elements (uploading files, toggling options).\n", "\n", "We'll also mount Google Drive to load the **Strawberry-Phi** dataset for fine-tuning later." ] }, { "cell_type": "code", "execution_count": null, "id": "47f5d9af", "metadata": { "tags": [ "hide-output" ] }, "outputs": [], "source": [ "!pip install torch transformers opencv-python librosa shap lime scikit-learn ipywidgets\n", "\n", "# Mount Google Drive (if running in Colab)\n", "from google.colab import drive\n", "drive.mount('/content/drive')" ] }, { "cell_type": "markdown", "id": "4013300f", "metadata": {}, "source": [ "## 2. Project Overview\n", "**Multi-Modal Deception Detection** involves analyzing multiple data streams (like facial expressions, voice, text, and physiological signals) to determine if a subject is being deceptive. By combining modalities, we can improve accuracy since deceit often manifests through subtle cues in different channels​:contentReference[oaicite:0]{index=0}.\n", "\n", "**ReAct Reasoning Framework**: The ReAct (Reason + Act) framework interleaves logical reasoning with actionable operations. Instead of making predictions blindly, the system generates a reasoning trace (chain-of-thought) and uses that to inform its actions. This combined approach has been shown to improve decision-making and interpretability​:contentReference[oaicite:1]{index=1}. In practice, the agent will reason about the inputs (e.g., \"The subject is fidgeting and voice pitch is high, which often indicates stress\") and take actions (e.g., flag as potential lie) in a loop​:contentReference[oaicite:2]{index=2}.\n", "\n", "We also integrate **GSPO (Generative Self-Play Optimization)** with ReAct. GSPO uses self-play reinforcement learning: the model can simulate conversations or scenarios with itself to improve its lie-detection policy over time. 
    "\n",
    "#### Ethical AI Considerations:\n",
    "- **Transparency**: Our system provides reasoning traces and uses explainability tools (LIME, SHAP) so users can understand *why* a decision was made, addressing the \"lack of explainability\" concern in AI lie detection.\n",
    "- **Bias Mitigation**: We must ensure the models do not overfit to demographic features (e.g., avoiding predictions based on gender or ethnicity). Training on diverse data and testing for bias helps create fair outcomes.\n",
    "- **Privacy**: All processing is done locally (no data is sent to external servers). We avoid storing sensitive personal data and only use the inputs for real-time analysis.\n",
    "- **Responsible Use**: Lie detection AI can be misused. This notebook is for research and educational purposes. Any real-world deployment should comply with legal standards and consider the potential for false positives/negatives.\n" ] }, { "cell_type": "markdown", "id": "c85d16a4", "metadata": {}, "source": [
    "## 3. Model Implementations\n",
    "We implement separate models for each modality. Each model outputs a confidence score or decision about deception for its modality. Later, we'll fuse these results.\n",
    "\n",
    "The models will be simple prototypes (not fully trained) to illustrate the architecture:\n",
    "- **Vision Model**: A CNN for facial expression and micro-expression analysis from video frames or images.\n",
    "- **Audio Model**: An LSTM (or GRU) for vocal analysis, capturing stress or pitch anomalies in speech.\n",
    "- **Text Model**: A Transformer (e.g., BERT) for analyzing textual statements for linguistic cues of deception.\n",
    "- **Physiological Model (Optional)**: Placeholder for processing signals like heart rate or skin conductance.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "a577b2d2", "metadata": {}, "outputs": [], "source": [
    "# Vision Model: CNN-based facial analysis\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "\n",
    "class VisionCNN(nn.Module):\n",
    "    def __init__(self):\n",
    "        super(VisionCNN, self).__init__()\n",
    "        # Simple CNN: 2 conv layers + FC\n",
    "        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, stride=1, padding=1)\n",
    "        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, stride=1, padding=1)\n",
    "        self.pool = nn.MaxPool2d(2, 2)\n",
    "        # Assuming input images are 64x64, after 2 pools -> 16x16\n",
    "        self.fc1 = nn.Linear(32 * 16 * 16, 2)  # output: [lie_score, truth_score]\n",
    "\n",
    "    def forward(self, x):\n",
    "        x = self.pool(F.relu(self.conv1(x)))\n",
    "        x = self.pool(F.relu(self.conv2(x)))\n",
    "        x = x.view(x.size(0), -1)\n",
    "        x = self.fc1(x)\n",
    "        return x\n",
    "\n",
    "# Instantiate the vision model (untrained for now)\n",
    "vision_model = VisionCNN()\n",
    "print(vision_model)" ] }, { "cell_type": "code", "execution_count": null, "id": "6087ded2", "metadata": {}, "outputs": [], "source": [
    "# Audio Model: LSTM-based vocal stress analysis\n",
    "import numpy as np\n",
    "import torch.nn.utils.rnn as rnn_utils\n",
    "\n",
    "class AudioLSTM(nn.Module):\n",
    "    def __init__(self, input_size=13, hidden_size=32, num_layers=1):\n",
    "        super(AudioLSTM, self).__init__()\n",
    "        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)\n",
    "        self.fc = nn.Linear(hidden_size, 2)  # 2 classes: lie or truth\n",
    "\n",
    "    def forward(self, x, lengths=None):\n",
    "        # x: batch of sequences (batch, seq_len,
features)\n", " if lengths is not None:\n", " # pack padded sequence if lengths provided\n", " x = rnn_utils.pack_padded_sequence(x, lengths, batch_first=True, enforce_sorted=False)\n", " lstm_out, _ = self.lstm(x)\n", " if lengths is not None:\n", " lstm_out, _ = rnn_utils.pad_packed_sequence(lstm_out, batch_first=True)\n", " # Take output of last time step\n", " if lengths is not None:\n", " idx = (lengths - 1).view(-1, 1, 1).expand(lstm_out.size(0), 1, lstm_out.size(2))\n", " last_outputs = lstm_out.gather(1, idx).squeeze(1)\n", " else:\n", " last_outputs = lstm_out[:, -1, :]\n", " out = self.fc(last_outputs)\n", " return out\n", "\n", "# Instantiate the audio model (untrained placeholder)\n", "audio_model = AudioLSTM()\n", "print(audio_model)" ] }, { "cell_type": "code", "execution_count": null, "id": "bcd6bc3a", "metadata": {}, "outputs": [], "source": [ "# Text Model: Transformer-based deception analysis\n", "from transformers import AutoTokenizer, AutoModelForSequenceClassification\n", "import torch\n", "import torch.nn.functional as F\n", "\n", "# We use a pre-trained BERT model for binary classification (truth/lie)\n", "model_name = 'bert-base-uncased'\n", "tokenizer = AutoTokenizer.from_pretrained(model_name)\n", "text_model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)\n", "\n", "# Function to get prediction from text model\n", "def text_model_predict(text):\n", " inputs = tokenizer(text, return_tensors='pt', padding=True, truncation=True)\n", " outputs = text_model(**inputs)\n", " logits = outputs.logits\n", " probs = F.softmax(logits, dim=1)\n", " # probs is a tensor of shape (batch_size, 2)\n", " prob_np = probs.detach().cpu().numpy()\n", " return prob_np\n", "\n", "# Example usage (with dummy text)\n", "example_text = \"I absolutely did not take the money.\" # a deceptive statement example\n", "probs = text_model_predict([example_text])\n", "print(f\"Predicted probabilities (lie/truth) for example text: {probs}\")" ] }, { "cell_type": "code", "execution_count": null, "id": "0af87b99", "metadata": {}, "outputs": [], "source": [ "# Physiological Model (Optional): Placeholder for biometric data analysis\n", "# Example of physiological signals: heart rate, skin conductance, blood pressure, etc.\n", "# We'll create a simple placeholder class that could be extended for real sensor input.\n", "\n", "class PhysiologicalModel:\n", " def __init__(self):\n", " # No actual model, just a placeholder\n", " self.name = 'PhysioModel'\n", " def predict(self, data):\n", " # data could be a dictionary of sensor readings\n", " # Here we return a dummy neutral prediction\n", " return np.array([0.5, 0.5]) # equal probability of lie/truth\n", "\n", "physio_model = PhysiologicalModel()\n", "print(\"Physiological model ready (placeholder):\", physio_model.name)" ] }, { "cell_type": "markdown", "id": "bd3fe080", "metadata": {}, "source": [ "## 4. GSPO Integration\n", "Here we integrate **Generative Self-Play Optimization (GSPO)** to enhance the model's decision-making through reinforcement learning. In GSPO, the system can create simulated scenarios and learn from them (like an agent playing against itself to improve skill).\n", "\n", "- **Self-Play Reinforcement Learning**: The model (as an agent) plays both roles in a deception scenario (questioner and responder). For example, it might simulate asking a question and then answering either truthfully or deceptively. 
The agent then tries to predict deception on these simulated answers, receiving a reward for correct detection. Over many iterations, this self-play helps the agent refine its policy for detecting lies.\n", "- This approach is inspired by how game-playing AIs train via self-play (e.g., AlphaGo Zero using self-play to surpass human performance). It allows the model to explore a wide range of scenarios beyond the initial dataset.\n", "\n", "- **Optional Learning Toggle**: We implement GSPO in a modular way. Users can turn this self-play learning on or off (for example, to compare performance with/without reinforcement learning). By default, the system won't do self-play unless explicitly enabled, to avoid long training times in this demo.\n", "\n", "- **Fine-Tuning with Strawberry-Phi Dataset**: We incorporate a fine-tuning phase using the `strawberry-phi` dataset, which is assumed to contain recorded deception instances (possibly multi-modal). Fine-tuning on real or richly simulated data like Strawberry-Phi ensures the models align better with actual deception cues.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "228f6b87", "metadata": {}, "outputs": [], "source": [ "# GSPO Self-Play Reinforcement Learning (simplified simulation)\n", "import random\n", "\n", "class SelfPlayAgent:\n", " def __init__(self, detector_model):\n", " self.model = detector_model # could be a combined model or policy\n", " self.learning = False\n", " self.training_history = []\n", "\n", " def enable_learning(self, flag=True):\n", " self.learning = flag\n", "\n", " def simulate_scenario(self):\n", " \"\"\"Simulate a deception scenario. Returns (input_data, is_deceptive).\"\"\"\n", " # For simplicity, random simulation: generate a random outcome\n", " # In practice, this could use a generative model to create realistic scenarios\n", " is_deceptive = random.choice([0, 1]) # 0 = truth, 1 = lie\n", " simulated_data = {\n", " 'video': None, # no actual video in this simulation\n", " 'audio': None,\n", " 'text': \"simulated statement\",\n", " 'physio': None\n", " }\n", " return simulated_data, is_deceptive\n", "\n", " def train_self_play(self, episodes=5):\n", " if not self.learning:\n", " print(\"Self-play learning is disabled. 
Skipping training.\")\n", " return\n", " for ep in range(episodes):\n", " data, truth_label = self.simulate_scenario()\n", " # Here we would run the detection model on the simulated data\n", " # and get a prediction (e.g., 1 for lie, 0 for truth)\n", " # We'll simulate prediction randomly for this demo:\n", " pred_label = random.choice([0, 1])\n", " reward = 1 if pred_label == truth_label else -1\n", " # In a real scenario, use this reward to update model (e.g., policy gradient)\n", " self.training_history.append(reward)\n", " print(f\"Episode {ep+1}: truth={truth_label}, pred={pred_label}, reward={reward}\")\n", "\n", "# Initialize a self-play agent (using text model as base for simplicity)\n", "agent = SelfPlayAgent(text_model)\n", "agent.enable_learning(flag=False) # Disabled by default\n", "agent.train_self_play(episodes=3)" ] }, { "cell_type": "code", "execution_count": null, "id": "4615c03c", "metadata": {}, "outputs": [], "source": [ "# Fine-tuning with Strawberry-Phi dataset (placeholder)\n", "import pandas as pd\n", "phi_data = None\n", "try:\n", " # Attempt to load JSONL\n", " phi_data = pd.read_json('/content/drive/MyDrive/strawberry-phi.jsonl', lines=True)\n", "except Exception:\n", " try:\n", " phi_data = pd.read_parquet('/content/drive/MyDrive/strawberry-phi.parquet')\n", " except Exception as e:\n", " print(\"Strawberry-Phi dataset not found. Please upload it to Google Drive.\")\n", "\n", "if phi_data is not None:\n", " print(\"Strawberry-Phi data loaded. Rows:\", len(phi_data))\n", " # TODO: process the dataset, e.g., extract features, train models\n", "else:\n", " print(\"Proceeding without Strawberry-Phi fine-tuning.\")" ] }, { "cell_type": "markdown", "id": "8660904a", "metadata": {}, "source": [ "## 5. Fusion Model\n", "After obtaining results from each modality-specific model, we need to combine them into a final decision. This is handled by a **Fusion Model** or strategy.\n", "\n", "Common fusion approaches:\n", "- **Majority Voting**: Each modality votes truth or lie, and the majority wins. This is simple and robust to one model's errors.\n", "- **Weighted Ensemble**: Assign weights to each modality based on confidence or accuracy, then compute a weighted sum of lie probabilities.\n", "- **Learned Fusion (Meta-Model)**: Train a separate classifier that takes each model's output (or confidence) as input features and outputs the final decision. This could be a small neural network or logistic regression trained on a validation set.\n", "\n", "For our system, we'll implement a simple weighted approach. We assume each model outputs a probability of deception (lie). 
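As an aside, the learned-fusion (meta-model) option could be prototyped with a small scikit-learn classifier. The sketch below trains a logistic regression on synthetic per-modality lie probabilities; the data and variable names are purely illustrative and not part of the main pipeline:\n",
    "\n",
    "```python\n",
    "# Hedged sketch of a learned fusion meta-model (synthetic data, illustration only)\n",
    "import numpy as np\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "# Each row: [vision_lie_prob, audio_lie_prob, text_lie_prob]; label: 1 = lie, 0 = truth\n",
    "X_meta = rng.random((200, 3))\n",
    "y_meta = (X_meta.mean(axis=1) + 0.1 * rng.standard_normal(200) > 0.5).astype(int)\n",
    "\n",
    "meta_clf = LogisticRegression().fit(X_meta, y_meta)\n",
    "print(\"Fused lie probability:\", meta_clf.predict_proba([[0.7, 0.4, 0.9]])[0][1])\n",
    "```\n",
    "\n",
    "For this notebook, we keep the simpler weighted scheme.\n",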
    "We'll average these probabilities (or give higher weight to modalities we trust more) and then apply a threshold.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "f8000823", "metadata": {}, "outputs": [], "source": [
    "# Fusion function for combining modality outputs\n",
    "def fuse_outputs(results, weights=None):\n",
    "    \"\"\"\n",
    "    results: list of per-modality outputs; each is a dict with a 'lie' (or 'lie_score') key,\n",
    "             a (lie, truth) sequence, or a bare lie probability.\n",
    "    weights: optional list of weights for each modality.\n",
    "    returns: final decision ('lie' or 'truth') and combined lie probability.\n",
    "    \"\"\"\n",
    "    if weights is None:\n",
    "        weights = [1] * len(results)\n",
    "    total_weight = sum(weights)\n",
    "    # Weighted average of lie probabilities\n",
    "    combined_score = 0.0\n",
    "    for res, w in zip(results, weights):\n",
    "        if isinstance(res, dict):\n",
    "            # Use .get() defaults so a legitimate 0.0 probability isn't skipped\n",
    "            lie_prob = res.get('lie', res.get('lie_score', 0.0))\n",
    "        elif isinstance(res, (list, tuple, np.ndarray)):\n",
    "            lie_prob = float(res[0])  # index 0 = lie, matching text_model_predict's ordering\n",
    "        else:\n",
    "            lie_prob = float(res)\n",
    "        combined_score += w * lie_prob\n",
    "    combined_score /= total_weight\n",
    "    decision = 'lie' if combined_score >= 0.5 else 'truth'\n",
    "    return decision, combined_score\n",
    "\n",
    "# Example: fuse dummy outputs from the models\n",
    "vision_out = {'lie': 0.7, 'truth': 0.3}\n",
    "audio_out = {'lie': 0.4, 'truth': 0.6}\n",
    "text_out = {'lie': 0.9, 'truth': 0.1}\n",
    "physio_out = {'lie': 0.5, 'truth': 0.5}\n",
    "final_decision, score = fuse_outputs([vision_out, audio_out, text_out, physio_out])\n",
    "print(f\"Final decision: {final_decision} (lie probability = {score:.2f})\")" ] }, { "cell_type": "markdown", "id": "85e09344", "metadata": {}, "source": [
    "## 6. ReAct Agent\n",
    "The ReAct agent is responsible for the reasoning-action loop. It should mimic how an expert would analyze evidence step by step, justifying each conclusion with reasoning before making the next move (action). Our ReAct agent will use the outputs from the above models and reason about them interactively.\n",
    "\n",
    "Key aspects of our ReAct implementation:\n",
    "- The agent will gather observations from each modality (e.g., *\"Vision model sees nervous facial expression.\"*).\n",
    "- It will reason about these observations (*\"Nervous face + high voice pitch = likely stress from lying\"*).\n",
    "- Based on this reasoning, it may decide an action, such as concluding \"lie\" or asking for more input if uncertain.\n",
    "- The loop continues if more reasoning or data is needed. For simplicity, our agent will do one pass of reasoning and then decide.\n",
    "\n",
    "The agent's decision-making process (as pseudocode):\n",
    "1. **Observe**: Get inputs from modalities.\n",
    "2. **Reason**: Form a narrative like \"The text content contradicts known facts and the speaker's voice is shaky.\"\n",
    "3. **Act**: Decide on an output (lie or truth) or ask for more data if needed.\n",
    "4. **Explain**: Provide the reasoning trace to the user for transparency.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "e8391df5", "metadata": {}, "outputs": [], "source": [
    "# ReAct Agent Implementation (simplified reasoning loop)\n",
    "def react_agent_decision(video=None, audio=None, text=None, physio=None):\n",
    "    reasoning_trace = []\n",
    "    modality_results = []\n",
    "    # 1.
Observe from each modality if available\n", " if video is not None:\n", " # Use vision model to get lie probability\n", " # (Here we simulate by random since we don't have actual video frames)\n", " vision_prob = random.random()\n", " modality_results.append({'lie': vision_prob, 'truth': 1-vision_prob})\n", " reasoning_trace.append(f\"Vision analysis suggests lie probability {vision_prob:.2f}.\")\n", " if audio is not None:\n", " audio_prob = random.random()\n", " modality_results.append({'lie': audio_prob, 'truth': 1-audio_prob})\n", " reasoning_trace.append(f\"Audio analysis suggests lie probability {audio_prob:.2f}.\")\n", " if text is not None:\n", " # Use text model\n", " probs = text_model_predict([text]) # get [ [lie_prob, truth_prob] ]\n", " lie_prob = float(probs[0][0])\n", " modality_results.append({'lie': lie_prob, 'truth': float(probs[0][1])})\n", " reasoning_trace.append(f\"Text analysis suggests lie probability {lie_prob:.2f} for the statement.\")\n", " if physio is not None:\n", " physio_prob = random.random()\n", " modality_results.append({'lie': physio_prob, 'truth': 1-physio_prob})\n", " reasoning_trace.append(f\"Physiological analysis suggests lie probability {physio_prob:.2f}.\")\n", " \n", " if not modality_results:\n", " return \"No input provided\", None\n", " # 2. Reason: (In a more complex system, we could add additional logical rules or ask follow-up questions.)\n", " if len(modality_results) > 1:\n", " reasoning_trace.append(\"Combining all modalities to form a conclusion.\")\n", " else:\n", " reasoning_trace.append(\"Single modality provided, basing conclusion on that alone.\")\n", " \n", " # 3. Act: fuse results to get final decision\n", " decision, score = fuse_outputs(modality_results)\n", " reasoning_trace.append(f\"Final decision: {decision.upper()} (confidence {score:.2f}).\")\n", " \n", " return \"\\n\".join(reasoning_trace), decision\n", "\n", "# Example usage of ReAct agent:\n", "reasoning, decision = react_agent_decision(video=True, audio=True, text=\"I am telling the truth.\")\n", "print(\"Reasoning Trace:\\n\" + reasoning)\n", "print(\"Decision:\", decision)" ] }, { "cell_type": "markdown", "id": "1329ce16", "metadata": {}, "source": [ "## 7. Interactive Features\n", "To make the system interactive, we include features that allow user input and involvement:\n", "\n", "- **File Uploads**: Users can upload video, audio, or text for analysis. We use `ipywidgets` to provide UI elements (like file upload buttons) in Colab.\n", "- **Human-in-the-loop Validation**: After the model makes a decision, the user can review the reasoning and provide feedback or corrections. For example, if the model is wrong, the user could label the instance, which could be logged for further training.\n", "- **Explainability Tools**: We integrate LIME and SHAP to explain model predictions. 
For example, LIME can highlight which words in the text most influenced the prediction, and SHAP can indicate which facial features contributed to the vision model's output.\n",
    "\n",
    "These features help users trust and verify the system's outputs, turning the detection process into a cooperative effort between the AI and the human user.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "1859e2e7", "metadata": {}, "outputs": [], "source": [
    "# Interactive widget for file upload\n",
    "import ipywidgets as widgets\n",
    "\n",
    "# Create upload widgets for video, audio, text\n",
    "video_upload = widgets.FileUpload(accept=\".mp4,.mov,.avi\", description=\"Upload Video\", multiple=False)\n",
    "audio_upload = widgets.FileUpload(accept=\".wav,.mp3\", description=\"Upload Audio\", multiple=False)\n",
    "text_input = widgets.Textarea(placeholder='Enter text to analyze', description='Text:')\n",
    "\n",
    "# Display widgets\n",
    "display(video_upload)\n",
    "display(audio_upload)\n",
    "display(text_input)\n",
    "\n",
    "# Button to trigger analysis\n",
    "analyze_button = widgets.Button(description=\"Analyze\")\n",
    "output_area = widgets.Output()\n",
    "\n",
    "def first_upload(upload_widget):\n",
    "    # FileUpload.value is a dict in ipywidgets 7.x and a tuple in 8.x; handle both\n",
    "    if not upload_widget.value:\n",
    "        return None\n",
    "    value = upload_widget.value\n",
    "    return list(value.values())[0] if isinstance(value, dict) else value[0]\n",
    "\n",
    "def on_analyze_clicked(b):\n",
    "    with output_area:\n",
    "        output_area.clear_output()\n",
    "        vid_file = first_upload(video_upload)\n",
    "        aud_file = first_upload(audio_upload)\n",
    "        txt = text_input.value if text_input.value else None\n",
    "        reasoning, decision = react_agent_decision(video=vid_file, audio=aud_file, text=txt)\n",
    "        print(\"Reasoning:\\n\" + reasoning)\n",
    "        print(\"Decision:\", decision)\n",
    "\n",
    "analyze_button.on_click(on_analyze_clicked)\n",
    "display(analyze_button)\n",
    "display(output_area)" ] }, { "cell_type": "code", "execution_count": null, "id": "765ecaf3", "metadata": {}, "outputs": [], "source": [
    "# Explainability Example with LIME (for text model)\n",
    "from lime.lime_text import LimeTextExplainer\n",
    "\n",
    "# Class names follow the notebook's convention: index 0 = lie, index 1 = truth (matching text_model_predict)\n",
    "explainer = LimeTextExplainer(class_names=[\"Lie\", \"Truth\"])\n",
    "# We'll use the text model's predict function for probabilities\n",
    "if 'text_model_predict' in globals():\n",
    "    exp = explainer.explain_instance(\"I swear I didn't do it\",\n",
    "                                     lambda x: text_model_predict(x),\n",
    "                                     num_features=5)\n",
    "    # Display the explanation in notebook (as text)\n",
    "    explanation = exp.as_list()\n",
    "    print(\"Top influences for the text model prediction:\")\n",
    "    for word, score in explanation:\n",
    "        print(f\"{word}: {score:.3f}\")\n",
    "else:\n",
    "    print(\"Text model not available for explanation.\")" ] }, { "cell_type": "markdown", "id": "f85ffbf7", "metadata": {}, "source": [
    "## 8. Inference & Real-Time Processing\n",
    "Now that we have the components in place, we can use the system for inference on new data. This can be done offline (one recorded input at a time) or in real time.\n",
    "\n",
    "For **real-time processing**, imagine a scenario like a live interview or interrogation. The system would continuously capture video frames and audio snippets, run them through the respective models, and update its deception probability in real time. The ReAct agent can continuously reason over the new data.\n",
    "\n",
    "In this notebook setting, we'll simulate real-time processing by iterating through some data in a loop with delays.\n",
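    "\n",
    "One simple way to keep a live readout stable (see the note on smoothing below) is to average the most recent per-segment lie probabilities. Here is a small sketch; the incoming scores are made up for illustration:\n",
    "\n",
    "```python\n",
    "# Hedged sketch: rolling-average smoothing of streaming lie probabilities\n",
    "# (the scores below are invented; in practice they would come from the models)\n",
    "from collections import deque\n",
    "\n",
    "window = deque(maxlen=5)  # keep the 5 most recent scores\n",
    "incoming_scores = [0.2, 0.8, 0.3, 0.9, 0.7, 0.4]\n",
    "\n",
    "for score in incoming_scores:\n",
    "    window.append(score)\n",
    "    smoothed = sum(window) / len(window)\n",
    "    print(f\"raw={score:.2f}  smoothed={smoothed:.2f}\")\n",
    "```\n",
    "\n",
    "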
In a real deployment, one could use threads or async processes to handle streaming data from a webcam and microphone.\n", "\n", "*Note:* Real-time use requires efficient processing and possibly hardware acceleration (GPU) to keep up with live data. There's also a need to smooth predictions over time to avoid jitter (e.g., using a rolling average of recent outputs).\n" ] }, { "cell_type": "code", "execution_count": null, "id": "4e15e160", "metadata": {}, "outputs": [], "source": [ "# Simulated real-time processing\n", "import time\n", "\n", "# Suppose we have a list of incoming text segments (as an example of streaming data)\n", "streaming_texts = [\n", " \"Hello, I'm happy to talk to you.\",\n", " \"I have nothing to hide.\",\n", " \"(nervous laugh) Sure, ask me anything...\",\n", " \"I already told you everything I know.\"\n", "]\n", "\n", "print(\"Starting live analysis loop...\\n\")\n", "for segment in streaming_texts:\n", " # Simulate delay as if processing streaming input\n", " time.sleep(1)\n", " reasoning, decision = react_agent_decision(text=segment)\n", " print(f\"Input: {segment}\\nDecision: {decision.upper()}\\n\")" ] }, { "cell_type": "markdown", "id": "de0440b8", "metadata": {}, "source": [ "## 9. Testing & Evaluation\n", "To ensure our system works as expected, we include testing and evaluation steps:\n", "\n", "- **Unit Tests**: We create simple tests for each component (e.g., check that the vision model outputs the correct shape, or the fusion function behaves correctly). In Python, one could use the `unittest` framework or simple `assert` statements for validation.\n", "- **Performance Evaluation**: If we have labeled test data, we can measure accuracy, F1-score, AUC, etc. Here we'll simulate predictions and compute a confusion matrix and classification report using scikit-learn.\n", "- **Fairness Assessments**: It's important to test the model for bias. If we had data tagged with demographics, we could check performance separately for each group to ensure consistency. We might also use techniques like counterfactual testing (e.g., swapping gender-specific words in text to see if prediction changes) to identify bias.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "8e1712b6", "metadata": {}, "outputs": [], "source": [ "# Simple Unit Test for Fusion Function\n", "assert fuse_outputs([{'lie':0.8,'truth':0.2}, {'lie':0.8,'truth':0.2}])[0] == 'lie', \"Fusion failed for obvious lie case\"\n", "assert fuse_outputs([{'lie':0.1,'truth':0.9}, {'lie':0.2,'truth':0.8}])[0] == 'truth', \"Fusion failed for obvious truth case\"\n", "print(\"Fusion function unit tests passed!\")\n", "\n", "# Simulated Performance Evaluation\n", "from sklearn.metrics import accuracy_score, f1_score, confusion_matrix, classification_report\n", "# Simulate some ground truth labels and predictions (1=lie, 0=truth)\n", "y_true = [0, 0, 1, 1, 1, 0]\n", "y_pred = [0, 1, 1, 1, 0, 0]\n", "print(\"Accuracy:\", accuracy_score(y_true, y_pred))\n", "print(\"F1-score:\", f1_score(y_true, y_pred, average='binary'))\n", "print(\"Confusion Matrix:\\n\", confusion_matrix(y_true, y_pred))\n", "print(\"Classification Report:\\n\", classification_report(y_true, y_pred, target_names=[\"Truth\",\"Lie\"]))" ] }, { "cell_type": "markdown", "id": "777b0ba6", "metadata": {}, "source": [ "## 10. Ethical Considerations\n", "Building a lie detection system raises important ethical questions. We conclude by addressing these aspects:\n", "\n", "- **Privacy**: Deception detection can be very invasive. 
Video and audio analysis might reveal sensitive information. It's crucial to obtain informed consent from individuals being analyzed and to ensure data is stored securely (or not at all, as in our design).\n",
    "- **Bias and Fairness**: As noted earlier, AI models can inadvertently learn biases. For example, certain facial expressions might be more common in some cultures without indicating lying. We should continuously test for and mitigate bias. Techniques include balanced training data, bias correction algorithms, and human review of contentious cases.\n",
    "- **False Accusations**: No lie detector is 100% accurate – even humans are fallible. AI predictions should not be taken as absolute truth. The system should ideally express uncertainty (e.g., a confidence score) and allow for an appeal or secondary review process. The cost of wrongly accusing someone is high, so the threshold for labeling something a lie should be chosen carefully.\n",
    "- **Legal Compliance**: Different jurisdictions have laws about recording conversations, biometric data use, and the admissibility of lie detection in court. Any deployment of this technology must comply with privacy laws (like GDPR) and regulations governing such tools. Organizations like the APA also publish ethical guidelines on lie detection usage.\n",
    "- **Responsible Deployment**: We emphasize that this project is a prototype. In practice, one should involve ethicists, legal experts, and psychologists before using an AI lie detection system in real-world situations. It should augment human judgment, not replace it.\n",
    "\n",
    "By considering these factors, developers and users of lie detection AI can aim to minimize harm and maximize the benefits of the technology." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "name": "python", "version": "3.9" } }, "nbformat": 4, "nbformat_minor": 5 }