Miguel García García (mikelgg93)
@mikelgg93
mikelgg93 / README.md
Last active October 30, 2025 10:44
Add a wearer ID column to any CSV file that has a recording id column, using the Cloud API

How to run it

  1. Copy the script locally, along with any enrichment export or a CSV file with a recording id column
  2. Get a Cloud API token from your profile
  3. Modify WORKSPACE_ID and CLOUD_TOKEN accordingly.
  4. Run:
# using Astral's uv
uv run --script append_wearer_id.py YOUR_CSV_FILE_PATH
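
For reference, the lookup behind step 4 is one Cloud API call per unique recording ID. A minimal sketch, assuming the v2 recordings endpoint, the api-key header, and a wearer_id field in the response (verify against the Cloud API docs):

# Sketch of the per-recording wearer lookup. The endpoint path, the
# "api-key" header name, and the "wearer_id" field are assumptions.
import requests

API = "https://api.cloud.pupil-labs.com/v2"
WORKSPACE_ID = "YOUR_WORKSPACE_ID"
CLOUD_TOKEN = "YOUR_CLOUD_TOKEN"

def wearer_id_for(recording_id: str) -> str:
    response = requests.get(
        f"{API}/workspaces/{WORKSPACE_ID}/recordings/{recording_id}",
        headers={"api-key": CLOUD_TOKEN},
    )
    response.raise_for_status()
    return response.json()["result"]["wearer_id"]  # assumed response shape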
@mikelgg93
mikelgg93 / README.md
Created October 28, 2025 12:11
Copy the gaze offset from one recording to others on Cloud via the API

Pupil Labs Cloud - Gaze Offset Copier

This is a command-line utility to copy the gaze_offset (Cloud gaze offset) from a single source recording to one or more target recordings within a Pupil Labs Cloud workspace. It's designed to make bulk-updating gaze offsets easy, especially when you have a good offset on one recording that you want to apply to many others.

Features

  • Interactive Mode: If you don't specify any target recordings, the script will fetch all recordings in your workspace and present an interactive, searchable list for you to select from.
  • Flexible Inputs: Specify target recordings individually, from a .txt file (one ID per line), or from a .json file (a list of ID strings).
  • Concurrent Updates: Applies updates to all target recordings asynchronously and concurrently, making it very fast.
  • Rich Reporting: Uses rich to display a clean table of which recordings succeeded or failed at the end.
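
The concurrent update step could look roughly like the sketch below. The PATCH endpoint, the api-key header, and the gaze_offset payload field are assumptions here, not the script's confirmed API surface:

# Sketch of concurrent gaze-offset updates with asyncio + httpx.
import asyncio
import httpx

API = "https://api.cloud.pupil-labs.com/v2"

async def copy_offset(workspace_id, token, offset, target_ids):
    async with httpx.AsyncClient(headers={"api-key": token}) as client:
        async def patch_one(recording_id):
            r = await client.patch(
                f"{API}/workspaces/{workspace_id}/recordings/{recording_id}",
                json={"gaze_offset": offset},  # assumed payload field
            )
            return recording_id, r.status_code
        # fire all updates at once and collect (id, status) pairs
        return await asyncio.gather(*(patch_one(rid) for rid in target_ids))

# e.g. asyncio.run(copy_offset("ws-id", "token", [0.01, -0.02], ids))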

Automatic Gaze Offset on Cloud using the Pupil Core calibration marker.

This script provides an automated way to set the gaze offset on Pupil Labs Cloud based on Pupil Core's calibration markers. By analyzing a segment of a recording in which the wearer looks at a known reference marker, it calculates the offset and sets it via the Cloud API.

Pre-requisites

We will use the Pupil Core stack to identify the marker in the video frames and compute the distance from gaze to the marker position over a period of time.
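
The core computation is straightforward once marker and gaze positions are expressed in the same scene-camera pixel coordinates: average the marker-minus-gaze difference over the analyzed segment. A minimal numpy sketch, assuming matched per-frame positions are already available:

# Sketch: estimate the gaze offset as the typical marker-gaze difference
# over a segment where the wearer fixates the calibration marker.
import numpy as np

def estimate_offset(marker_px: np.ndarray, gaze_px: np.ndarray) -> np.ndarray:
    """Both arrays are (N, 2) pixel positions from matched frames."""
    diffs = marker_px - gaze_px
    # median is more robust than mean against blinks / detection outliers
    return np.median(diffs, axis=0)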

# /// script
# requires-python = ">=3.11"
# dependencies = [
# "click",
# "opencv-python",
# "pupil-labs-neon-recording",
# "tqdm",
# "ultralytics",
# "rich",
# "scipy",
@mikelgg93
mikelgg93 / detect_markers.py
Last active July 3, 2025 08:37
Detect AprilTag markers on any image.
# /// script
# requires-python = ">=3.12"
# dependencies = [
# "numpy",
# "opencv-python",
# "pupil-apriltags",
# "click",
# "rich",
# "universal_pathlib",
# "requests",
@mikelgg93
mikelgg93 / ball_hand_rt.py
Last active October 23, 2025 09:55
Track hand pose and ball position in real time on Neon's scene camera using MediaPipe and YOLO11
# /// script
# requires-python = ">=3.10"
# dependencies = [
# "click",
# "matplotlib",
# "numpy",
# "opencv-python",
# "pupil-labs-realtime-api",
# "rich",
# "pandas",
@mikelgg93
mikelgg93 / eyelid_plot.py
Created June 17, 2025 10:26
Plot eyelid aperture, blinks and worn state with pl-neon-recording and matplotlib.
# /// script
# requires-python = ">=3.11"
# dependencies = [
# "matplotlib",
# "pupil-labs-neon-recording",
# "numpy",
# ]
# ///
from itertools import groupby
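
The loading side is the interesting part. A rough sketch with pl-neon-recording, where the load entry point and the eye-state field names are assumptions (check the library's stream documentation):

# Sketch: plot eyelid aperture over time (field names are assumptions).
import matplotlib.pyplot as plt
import pupil_labs.neon_recording as nr

rec = nr.load("path/to/recording")
eye = rec.eye_state  # per-frame eye-state stream
plt.plot(eye.ts, eye.eyelid_aperture_left_mm, label="left")    # assumed field
plt.plot(eye.ts, eye.eyelid_aperture_right_mm, label="right")  # assumed field
plt.xlabel("timestamp")
plt.ylabel("eyelid aperture [mm]")
plt.legend()
plt.show()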
@mikelgg93
mikelgg93 / yolo_plnr.py
Last active October 23, 2025 09:56
Plot YOLO detections on top of your scene camera video, together with gaze.
# /// script
# requires-python = ">=3.10"
# dependencies = [
# "opencv-python",
# "pupil-labs-video",
# "pupil-labs-neon-recording",
# "ultralytics",
# "tqdm",
# ]
# ///
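
The per-frame detection pattern is standard ultralytics usage; a minimal sketch on a single decoded frame (the model file is just an example):

# Sketch: run YOLO on one scene-camera frame and draw the detections.
import cv2
from ultralytics import YOLO

model = YOLO("yolo11n.pt")  # example model; downloaded on first use
frame = cv2.imread("scene_frame.png")  # stand-in for a decoded video frame
results = model(frame)
annotated = results[0].plot()  # boxes + labels drawn on a copy of the frame
cv2.imwrite("annotated.png", annotated)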
@mikelgg93
mikelgg93 / yolo_rt.py
Last active October 23, 2025 09:55
Run YOLO segmentation in real time over your Neon's scene camera stream with gaze.
# /// script
# requires-python = ">=3.10"
# dependencies = [
# "opencv-python",
# "pupil-labs-realtime-api",
# "ultralytics",
# "click",
# "lap",
# ]
# ///
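
Combining the realtime stream with a segmentation model, the loop is roughly: receive a matched frame, run the model, overlay the result and gaze. A minimal sketch (the model name is an example; ultralytics' ID tracking is what pulls in the lap dependency):

# Sketch: YOLO segmentation on the live Neon scene stream.
import cv2
from pupil_labs.realtime_api.simple import discover_one_device
from ultralytics import YOLO

model = YOLO("yolo11n-seg.pt")  # example segmentation model
device = discover_one_device()
try:
    while True:
        frame, gaze = device.receive_matched_scene_video_frame_and_gaze()
        results = model.track(frame.bgr_pixels, persist=True)  # uses lap
        annotated = results[0].plot()
        cv2.circle(annotated, (int(gaze.x), int(gaze.y)), 20, (0, 0, 255), 4)
        cv2.imshow("segmentation", annotated)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
finally:
    device.close()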
# /// script
# requires-python = ">=3.12"
# dependencies = [
# "numpy",
# "opencv-python-headless",
# "pymupdf",
# ]
# ///
import argparse