Dan Brickley danbri

danbri / gist:0e3dbcd53c2ccc3f2eb3054026153cd5
Created October 17, 2025 12:49
Text of foaf rdfweb codepiction page
Photo metadata: the co-depiction experiment
Author: Dan Brickley <[email protected]>
Overview
This document provides an illustrated walkthrough of the RDFWeb distributed photo-metadata experiment. It is still fairly bare, but should give you an idea of what we're attempting.
Nearby: co-depiction online demo | co-depiction paths demo
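The co-depiction idea, finding people who appear together in photos and tracing paths between people via shared photos, can be sketched with plain triples. A minimal illustration in Python; the photo and person identifiers are invented for the example, and this is not the RDFWeb code itself:

```python
from collections import defaultdict

# Toy (photo, predicate, person) triples; names are illustrative only.
triples = [
    ("photo1.jpg", "depicts", "danbri"),
    ("photo1.jpg", "depicts", "libby"),
    ("photo2.jpg", "depicts", "libby"),
    ("photo2.jpg", "depicts", "craig"),
]

def codepicted(triples):
    """Return the set of unordered pairs of people depicted in the same photo."""
    by_photo = defaultdict(set)
    for photo, pred, person in triples:
        if pred == "depicts":
            by_photo[photo].add(person)
    pairs = set()
    for people in by_photo.values():
        for a in people:
            for b in people:
                if a < b:  # canonical ordering avoids duplicate pairs
                    pairs.add((a, b))
    return pairs
```

Here libby co-occurs with both danbri and craig, so she forms a two-hop "co-depiction path" between them even though they never share a photo.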
@danbri
danbri / gist:7893ccd0cc6f5232320b811843354c9d
Created October 16, 2025 07:57
Heygen veo3 hamlet tester
Defining the Narrative Core
I'm currently focused on the core concept: a struggling, melodramatic actor navigating an interactive fiction game on an iPad. The blend of Shakespearean drama and a modern Instagram UI feels promising. I'm exploring the comedic potential of the actor's reactions to the game's choices, particularly during Hamlet's soliloquy.
Mapping the Scene Elements
I've broken down the prompt: a needy actor, INK-like game, Instagram UI, Hamlet's soliloquy. Thinking about the duration and orientation, I'm aiming for portrait, around 30 seconds. I'm leaning toward "Fans of Shakespeare, interactive fiction, and humor" as the target audience. I'm envisioning the actor dramatically reacting to the game's choices during the soliloquy, which is the core of the comedic bit.
Formulating the Structure
# === COLAB / GIST NOTEBOOK ===
# You can run this whole cell as-is in Google Colab.
# It will:
# 1) Clone nanoGPT
# 2) Add a Complex-LoRA module (block-real implementation)
# 3) Provide a runtime patcher that wraps attention/MLP linears
# 4) Smoke-test a forward pass; optional tiny training for a few steps
# --------
# CELL 1: Environment setup
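The "Complex-LoRA" patcher described above wraps attention/MLP linear layers with a low-rank update. The gist's actual implementation is not shown here, but the underlying LoRA idea — y = xWᵀ + α·(xAᵀ)Bᵀ, with B zero-initialised so the wrapped layer starts out identical to the base layer — can be sketched in plain NumPy (names and shapes are illustrative, not taken from the gist):

```python
import numpy as np

def lora_linear(x, W, A, B, alpha=1.0):
    """Base linear map plus a rank-r LoRA update.

    W: (d_out, d_in) frozen base weight
    A: (r, d_in)    trainable down-projection
    B: (d_out, r)   trainable up-projection (zero-init => no-op at start)
    """
    return x @ W.T + alpha * ((x @ A.T) @ B.T)

rng = np.random.default_rng(0)
d_in, d_out, r = 8, 4, 2
W = rng.normal(size=(d_out, d_in))
A = rng.normal(size=(r, d_in))
B = np.zeros((d_out, r))      # standard LoRA zero-init
x = rng.normal(size=(3, d_in))
```

A runtime patcher like the one the cell describes would replace each attention/MLP linear's forward pass with this augmented map while keeping W frozen and training only A and B.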
# Comparison of SDR + MCP + LLM Proposals
This document compares three related proposals for integrating **software-defined radio (SDR)** systems with **Model Context Protocol (MCP)** interfaces and **LLM-driven control**.
Projects studied:
- [SDRAngel driven by Ollama LLM using MCP](https://www.pg540.org/wiki/index.php/SDRAngel_driven_by_Ollama_LLM_using_MCP)
- [gr-mcp](https://github.com/yoelbassin/gr-mcp)
- [AetherLink SDR-MCP](https://www.juheapi.com/mcp-servers/N-Erickson/AetherLink-SDR-MCP)
---
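All three projects expose SDR control as MCP tools that an LLM client invokes over JSON-RPC 2.0. A hedged sketch of what such a tool invocation might look like on the wire; the tool name `set_frequency` and its arguments are hypothetical, not taken from any of the projects above:

```python
import json

# Hypothetical MCP "tools/call" request an LLM client might emit to
# retune a receiver. JSON-RPC 2.0 framing follows the MCP spec;
# the tool name and argument schema are invented for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "set_frequency",               # hypothetical tool
        "arguments": {"frequency_hz": 145_500_000},
    },
}
wire = json.dumps(request)
```

The MCP server (SDRAngel plugin, GNU Radio wrapper, or AetherLink) would map such a call onto its native SDR control API and return a JSON-RPC result.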
danbri / index.html
Created August 31, 2025 18:31
layerfake2
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="viewport-fit=cover,width=device-width,initial-scale=1.0" />
<title>Neon Triples</title>
<style>
:root {
--pill-bg: rgba(0, 0, 0, .55);


Phrases and Patterns to Avoid in LLM Outputs

Language & Tone Issues

Avoid these phrases:

- "serves/stands as a testament to"
- "plays a vital/significant role"
- "underscores its importance"
- "continues to captivate"
- "leaves a lasting impact"
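A list like this is easy to enforce mechanically. A small sketch of a regex-based checker; the function name and the exact pattern set are illustrative:

```python
import re

# Phrases to flag, as case-insensitive patterns covering the
# slash-separated variants listed above.
BANNED = [
    r"(?:serves|stands) as a testament to",
    r"plays a (?:vital|significant) role",
    r"underscores its importance",
    r"continues to captivate",
    r"leaves a lasting impact",
]
_PATTERN = re.compile("|".join(BANNED), re.IGNORECASE)

def flag_phrases(text):
    """Return every banned phrase found in the text, in order."""
    return [m.group(0) for m in _PATTERN.finditer(text)]
```

Such a checker can run as a post-processing pass over LLM outputs, rejecting or rewriting any response that triggers a match.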
# MCP API Search Log: Fishing Claims Investigation
## Session: 2024-11-15 14:32:00 UTC
### Initial Probes (14:32:00 - 14:33:45)
```log
[14:32:01] factcheck_search("fish")
[14:32:02] RESULTS: 3 claims found
- Claim ID: FC-2019-0892: "Eating fish twice a week prevents heart disease" (Rating: Mostly True, Source: HealthFactCheck.org, Date: 2019-03-12)
```
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no, viewport-fit=cover">
<meta name="apple-mobile-web-app-capable" content="yes">
<meta name="apple-mobile-web-app-status-bar-style" content="black-translucent">
<meta name="mobile-web-app-capable" content="yes">
<title>Bristol Vector Hunt - LOD</title>
<script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/r128/three.min.js"></script>