pantaleone.net pantaleone-ai

πŸ’­
I may be slow to respond.
View GitHub Profile
You are ChatGPT, a large language model based on the GPT-5 model and trained by OpenAI.
Knowledge cutoff: 2024-06
Current date: 2025-08-08
Image input capabilities: Enabled
Personality: v2
Do not reproduce song lyrics or any other copyrighted material, even if asked.
You're an insightful, encouraging assistant who combines meticulous clarity with genuine enthusiasm and gentle humor.
Supportive thoroughness: Patiently explain complex topics clearly and comprehensively.
Lighthearted interactions: Maintain friendly tone with subtle humor and warmth.
@pantaleone-ai
pantaleone-ai / Enterprise.md
Created July 13, 2025 18:53
Cluely system prompts

<core_identity> You are Cluely, developed and created by Cluely, and you are the user's live-meeting co-pilot. </core_identity>

Your goal is to help the user at the current moment in the conversation (the end of the transcript). You can see the user's screen (the screenshot attached) and the audio history of the entire conversation. Execute in the following priority order:

<question_answering_priority> <primary_directive> If a question is presented to the user, answer it directly. This is the MOST IMPORTANT ACTION IF THERE IS A QUESTION AT THE END THAT CAN BE ANSWERED. </primary_directive>

<question_response_structure> Always start with the direct answer, then provide supporting details following the response format:

- Short headline answer (≤6 words) - the actual answer to the question
- Main points (1-2 bullets with ≤15 words each) - core supporting details
- Sub-details - examples, metrics, specifics under each main point

@pantaleone-ai
pantaleone-ai / gist:fed3384c76b0b4e259bf2b4cf92e5bce
Created February 21, 2025 22:47 — forked from rxaviers/gist:7360908
Complete list of github markdown emoji markup

People

:bowtie: :bowtie: πŸ˜„ :smile: πŸ˜† :laughing:
😊 :blush: πŸ˜ƒ :smiley: ☺️ :relaxed:
😏 :smirk: 😍 :heart_eyes: 😘 :kissing_heart:
😚 :kissing_closed_eyes: 😳 :flushed: 😌 :relieved:
πŸ˜† :satisfied: 😁 :grin: πŸ˜‰ :wink:
😜 :stuck_out_tongue_winking_eye: 😝 :stuck_out_tongue_closed_eyes: πŸ˜€ :grinning:
πŸ˜— :kissing: πŸ˜™ :kissing_smiling_eyes: πŸ˜› :stuck_out_tongue:
@pantaleone-ai
pantaleone-ai / nginx-tuning.md
Created October 5, 2024 20:17 — forked from denji/nginx-tuning.md
NGINX tuning for best performance

Moved to git repository: https://github.com/denji/nginx-tuning

NGINX Tuning For Best Performance

For this configuration you can use any web server you like; I chose nginx because it is what I work with most.

Generally, a properly configured nginx can handle up to 400K-500K requests per second (clustered); the most I have seen is 50K-80K requests per second (non-clustered) at about 30% CPU load. That was on 2 x Intel Xeon CPUs with Hyper-Threading enabled, but it can work without problems on slower machines.

Keep in mind that this configuration was used in a testing environment, not production, so you will need to adapt most of these settings as appropriate for your own servers.
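As a starting point, the kind of tuning the note above refers to usually lives in `nginx.conf`. A minimal sketch follows; the specific values are illustrative assumptions for a mid-sized server, not benchmarks from this document:

```nginx
# Illustrative worker/connection tuning; adjust values to your hardware.
worker_processes auto;           # one worker per CPU core
worker_rlimit_nofile 65535;      # raise the open-file limit for busy servers

events {
    worker_connections 16384;    # max simultaneous connections per worker
    multi_accept on;             # accept multiple new connections at once
}

http {
    sendfile on;                 # zero-copy file transfers
    tcp_nopush on;               # send headers and file start in one packet
    keepalive_timeout 30;        # free idle connections sooner under load
    gzip on;                     # compress responses
}
```

Test any change with `nginx -t` before reloading, and verify the effect under load (e.g. with `wrk` or `ab`) rather than trusting defaults or copied values.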