Michael Stattelman (mstatt)

🎯
Forever trying to focus on better!
@mstatt
mstatt / gist:cea118190fa371b8e37631fa9ef2b916
Created August 6, 2025 08:55
The single-page HTML implementing the FALCONLENS Image Forensics solution
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>FALCONLENS // Image Forensics v1.2</title>
<style>
@import url('https://fonts.googleapis.com/css2?family=Orbitron:wght@400;700&family=Roboto+Mono:wght@300;400&display=swap');
:root {
@mstatt
mstatt / gist:3179acb594509c729b10c7afc09b6f91
Last active February 20, 2025 21:10
Streamlit fear-mongering detection
# pip install streamlit pytube youtube-transcript-api transformers pandas nltk
# streamlit run app.py
import streamlit as st
from youtube_transcript_api import YouTubeTranscriptApi
import re
import pandas as pd
from transformers import AutoModelForSequenceClassification, AutoTokenizer, TextClassificationPipeline
import nltk
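
The preview stops at the imports; the following is a minimal sketch of how these libraries typically compose into the app described above, not the gist's actual implementation. The classifier checkpoint is a stand-in sentiment model, and the URL parsing and chunking are assumptions.

import re

import pandas as pd
import streamlit as st
from transformers import AutoModelForSequenceClassification, AutoTokenizer, TextClassificationPipeline
from youtube_transcript_api import YouTubeTranscriptApi

# Stand-in checkpoint; the gist presumably loads its own fear-mongering classifier.
MODEL_NAME = "distilbert-base-uncased-finetuned-sst-2-english"

st.title("Fear-monger detection (sketch)")
url = st.text_input("YouTube URL")

if url:
    # Naive extraction of the 11-character video id from a standard watch URL.
    match = re.search(r"v=([\w-]{11})", url)
    if match:
        transcript = YouTubeTranscriptApi.get_transcript(match.group(1))
        text = " ".join(entry["text"] for entry in transcript)

        tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
        model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
        classifier = TextClassificationPipeline(model=model, tokenizer=tokenizer)

        # Score the transcript in ~500-character chunks to stay under the model's length limit.
        chunks = [text[i:i + 500] for i in range(0, len(text), 500)]
        st.dataframe(pd.DataFrame(classifier(chunks)))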
@mstatt
mstatt / Liar-Ai.md
Created February 9, 2025 06:53 — forked from ruvnet/Liar-Ai.md
Liar Ai: Multi-Modal Lie Detection System

Multi-Modal Lie Detection System using an Agentic ReAct Approach: Step-by-Step Tutorial

Author: rUv
Created by: rUv, cause he could


WTF? The world's most powerful lie detector.

🤯 Zoom calls will never be the same. I think I might have just created the world’s most powerful lie detector tutorial using deep research.

import os
import cv2
import torch
from PIL import Image
from transformers import AutoProcessor, AutoImageProcessor
from transformers import AutoModelForCausalLM
## -------------------------------------------------------------------------------------------------------------------
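
The preview ends at the imports; below is a minimal sketch of the frame-sampling step those imports point toward (OpenCV capture feeding PIL images to a multimodal model), not rUv's actual code. The function name, sampling interval, and file name are illustrative assumptions.

import cv2
from PIL import Image

def sample_frames(video_path: str, every_n: int = 30) -> list:
    """Grab every Nth frame from a video and return them as PIL images."""
    cap = cv2.VideoCapture(video_path)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            # OpenCV yields BGR arrays; PIL expects RGB.
            frames.append(Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)))
        idx += 1
    cap.release()
    return frames

# frames = sample_frames("interview.mp4")  # hypothetical input; the frames would then be
# passed through the AutoProcessor / AutoModelForCausalLM pair loaded from whichever
# multimodal checkpoint the tutorial selects.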
@mstatt
mstatt / gist:4b192a584803c7d264fc1540243b0d12
Last active July 17, 2023 20:23
Prompt Engineering 101
Prompt engineering refers to the process of designing and refining prompts for language models or AI systems. It involves crafting specific instructions or queries to elicit desired responses from the model.
Prompt engineering is crucial because language models like ChatGPT, as powerful as they are, require clear and well-defined input to generate accurate and relevant outputs. By carefully constructing prompts, developers can guide the model's behavior, improve its performance, and make it more reliable for specific tasks or applications.
Effective prompt engineering involves several strategies:
1. Specifying the format: Designing prompts that specify the desired format of the response. For example, using placeholders like "[Title]" or "[Person]" to indicate where specific information should be inserted.
2. Providing context: Giving the model relevant background information or context to improve its understanding of the task or question at hand. Both strategies are illustrated in the example template below.
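
A hypothetical Python illustration of strategies 1 and 2 (not taken from the gist): the template fixes the output format with bracketed placeholders and supplies context up front.

# Hypothetical template combining format placeholders with supplied context.
PROMPT_TEMPLATE = """You are a press-release editor.

Context: {context}

Write a summary using exactly this format:
[Title]: a headline of at most ten words
[Person]: the main person involved
[Summary]: two sentences describing what happened
"""

prompt = PROMPT_TEMPLATE.format(
    context="Acme Corp announced a new open-source OCR toolkit on Tuesday."
)
print(prompt)  # the filled-in prompt is what gets sent to the language model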
# USAGE
# python basic-ocr-with-spellcheck.py --image <imagename>
# import the necessary packages
from textblob import TextBlob
import pytesseract
import argparse
import cv2
# construct the argument parser and parse the arguments
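
The preview cuts off at the argument parser; the following is a self-contained sketch of how an OCR-plus-spell-check script along these lines typically finishes, not the gist's actual continuation. The flag names and preprocessing steps are assumptions.

import argparse

import cv2
import pytesseract
from textblob import TextBlob

# Parse the --image argument referenced in the usage comment.
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True, help="path to input image")
args = vars(ap.parse_args())

# Load the image, convert BGR -> RGB, and run Tesseract OCR on it.
image = cv2.imread(args["image"])
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
text = pytesseract.image_to_string(rgb)

# Show the raw OCR output next to TextBlob's spell-corrected version.
print("ORIGINAL:")
print(text)
print("CORRECTED:")
print(str(TextBlob(text).correct()))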