Thomas Wood (odellus)
odellus / qwen3-coder-streaming-v2.jinja
Created November 1, 2025 13:13
this ACTUALLY WORKS on that glm-4.5 tool calling PR of llama.cpp
{%- if not tools is defined %}
{%- set tools = [] %}
{%- endif %}
{# Extract system message if present #}
{%- if messages[0]["role"] == "system" %}
{%- set system_message = messages[0]["content"] %}
{%- set loop_messages = messages[1:] %}
{%- else %}
{%- set loop_messages = messages %}
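A minimal sketch of how a template like this gets wired into llama-server, assuming the gist is saved locally as qwen3-coder-streaming-v2.jinja and that the build from the PR exposes the usual --jinja and --chat-template-file flags (the paths here are placeholders):

# point llama-server at the custom chat template (placeholder paths)
./build/bin/llama-server \
  -m Qwen3-Coder-30B-A3B-Instruct-Q4_K_M.gguf \
  --jinja \
  --chat-template-file ./qwen3-coder-streaming-v2.jinja \
  --port 1234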
odellus / bash_run_qwen3.sh
Created November 1, 2025 13:11
qwen3 coder through llama-server and it actually works
# THIS WORKS!!!
/home/thomas-wood/src/pr/llama.cpp/build/bin/llama-server \
-m /home/thomas-wood/.cache/llama.cpp/lmstudio-community_Qwen3-Coder-30B-A3B-Instruct-GGUF_Qwen3-Coder-30B-A3B-Instruct-Q4_K_M.gguf \
-c 120000 \
--host 0.0.0.0 \
--port 1234 \
-fa 1 \
-ngl 99 \
-b 2048 \
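Once the server is up, a quick smoke test against the OpenAI-compatible endpoints it serves on the port above (assuming defaults and no API key):

# health check, then a one-shot chat completion
curl http://localhost:1234/health
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "hello"}]}'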
cd my-theia-app/electron-app
yarn add [email protected] --dev
npx electron-rebuild -f -w node-pty,keytar,native-keymap,drivelist,cpu-features,ssh2
npx theia build --mode development
ls lib/backend/ # should see electron-main.js
npx theia start
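electron-rebuild compiles the native modules above against a specific Electron ABI, so it can be worth confirming which Electron version the project resolved; a small optional check, not part of the original gist:

# print the locally installed Electron version the rebuild will target
npx electron --version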
{% macro render_item_list(item_list, tag_name='required') %}
{%- if item_list is defined and item_list is iterable and item_list | length > 0 %}
{%- if tag_name %}{{- '\n<' ~ tag_name ~ '>' -}}{% endif %}
{{- '[' }}
{%- for item in item_list -%}
{%- if loop.index > 1 %}{{- ", "}}{% endif -%}
{%- if item is string -%}
{{ "`" ~ item ~ "`" }}
{%- else -%}
{{ item }}
odellus / qwen3-proper.jina
Created October 26, 2025 07:02
actually working jinja template for qwen3 coder
{%- if not tools is defined %}
{%- set tools = [] %}
{%- endif %}
{# Extract system message if present #}
{%- if messages[0]["role"] == "system" %}
{%- set system_message = messages[0]["content"] %}
{%- set loop_messages = messages[1:] %}
{%- else %}
{%- set loop_messages = messages %}
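To exercise the tool-calling path this template handles, a request along these lines can be sent to the llama-server instance on port 1234 (a sketch: get_weather is a made-up tool, and the server needs to be started with --jinja and this template file):

# chat completion with a tools array so the model can emit a tool call
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "What is the weather in Boston?"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
          "type": "object",
          "properties": {"city": {"type": "string"}},
          "required": ["city"]
        }
      }
    }]
  }'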
"language_models": {
"openai_compatible": {
"lm-studio": {
"api_url": "http://localhost:1234/v1",
"available_models": [
{
"name": "glm-4.5-air@q4_k_m",
"max_tokens": 131072,
"max_output_tokens": 4000,
"max_completion_tokens": 131072,
"context_servers": {
"some-mcp-server": {
"source": "custom",
"enabled": true,
"command": "uv",
"args": [
"run",
"--project",
"/Users/thomas.wood/src/smolagents-example",
"/Users/thomas.wood/src/smolagents-example/search.py"

SearXNG setup

Step 1 - Install SearXNG

Open a terminal on macOS or Linux [if you are using Windows, consider your life choices up to this point and download a real operating system, please] and do the following.

They have detailed instructions elsewhere on how to do this, but I am trying to keep this self-contained.

mkdir -p ~/some/path/for/searxng/searxng
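One common way to run SearXNG is the official Docker image, mounting the directory created above as its config directory; a sketch, assuming Docker is installed and the searxng/searxng image defaults (port 8080, config in /etc/searxng) still apply:

cd ~/some/path/for/searxng
# run SearXNG in the background on port 8080, keeping its config in ./searxng
docker run --rm -d -p 8080:8080 \
  -v "${PWD}/searxng:/etc/searxng" \
  searxng/searxng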
odellus / env_config.json
Created July 6, 2025 13:18
environment configuration for sim2real lerobot
{
  "base_camera_settings": {
    "pos": [0.020, 0.190, 0.505],
    "fov": 0.655,
    "target": [0.151, 0.10, 0.15]
  },
  "greenscreen_overlay_path": "bg.png",
  "spawn_box_pos": [0.3, 0.05],
  "spawn_box_half_size": 0.1,
  "domain_randomization_config": {
{"homing_offset": [-2106, 4058, -107, -2049, 1011, -2400], "drive_mode": [0, 1, 0, 0, 1, 0], "start_pos": [2073, 4080, 101, 1994, 995, 1990], "end_pos": [3130, -3034, 1131, 3073, 13, 3424], "calib_mode": ["DEGREE", "DEGREE", "DEGREE", "DEGREE", "DEGREE", "LINEAR"], "motor_names": ["shoulder_pan", "shoulder_lift", "elbow_flex", "wrist_flex", "wrist_roll", "gripper"]}