NetHole / sway.shader
Created July 24, 2024 07:11 — forked from mandarinx/sway.shader
Sway shader
Shader "Toon/Lit Swaying" {
Properties {
_Color ("Main Color", Color) = (0.5,0.5,0.5,1)
_MainTex ("Base (RGB)", 2D) = "white" {}
_Ramp ("Toon Ramp (RGB)", 2D) = "gray" {}
_Speed ("MoveSpeed", Range(20,50)) = 25 // speed of the swaying
_Rigidness("Rigidness", Range(1,50)) = 25 // lower looks more "liquid", higher looks more rigid
_SwayMax("Sway Max", Range(0, 0.1)) = .005 // how far the swaying goes
_YOffset("Y offset", float) = 0.5 // y offset; vertices below this are not animated
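The properties above drive a sine-based vertex displacement. A minimal sketch of the underlying math in Python (the real shader does this per-vertex in Cg/HLSL; the function name and exact phase terms here are assumptions, not the shader's literal code):

```python
import math

def sway_offset(world_x, world_y, world_z, time,
                speed=25.0, rigidness=25.0, sway_max=0.005, y_offset=0.5):
    """Horizontal sway for one vertex, mirroring the shader properties.

    Vertices below y_offset stay fixed; above it, the vertex is pushed
    sideways by a sine wave whose phase depends on world position, so
    neighbouring plants move slightly out of sync.
    """
    if world_y < y_offset:
        return 0.0, 0.0  # no animation below the Y offset
    x = math.sin(time * speed + world_x / rigidness) * sway_max
    z = math.sin(time * speed + world_z / rigidness) * sway_max
    return x, z
```

Lower `rigidness` spreads the phase out over world space (a more "liquid" ripple); higher values make nearby vertices move in lockstep.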
NetHole / Outline.shader
Created July 24, 2024 06:04 — forked from ScottJDaley/Outline.shader
Wide Outlines Renderer Feature for URP and ECS/DOTS/Hybrid Renderer
// Original shader by @bgolus, modified slightly by @alexanderameye for URP, modified slightly more
// by @gravitonpunch for ECS/DOTS/HybridRenderer.
// https://twitter.com/bgolus
// https://medium.com/@bgolus/the-quest-for-very-wide-outlines-ba82ed442cd9
// https://alexanderameye.github.io/
// https://twitter.com/alexanderameye/status/1332286868222775298
Shader "Hidden/Outline"
{
Properties
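The linked @bgolus article builds wide outlines with the jump flood algorithm (JFA), which propagates nearest-seed coordinates in halving steps instead of dilating one pixel per pass. A minimal CPU sketch on a 2D grid, assuming a boolean silhouette mask (the real renderer feature runs this as GPU fullscreen passes):

```python
def jump_flood_distance(mask):
    """Distance to the nearest silhouette pixel via jump flooding.

    mask: 2D list of bools, True where the object's silhouette is drawn.
    Returns a grid of distances; an outline pass would then shade pixels
    whose distance falls below the desired outline width.
    """
    h, w = len(mask), len(mask[0])
    # seed pass: each silhouette pixel records its own coordinate
    nearest = [[(x, y) if mask[y][x] else None for x in range(w)]
               for y in range(h)]
    step = 1
    while step * 2 < max(w, h):
        step *= 2
    while step >= 1:
        for y in range(h):
            for x in range(w):
                best = nearest[y][x]
                # sample the 8 neighbours `step` pixels away (plus self)
                for dy in (-step, 0, step):
                    for dx in (-step, 0, step):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and nearest[ny][nx]:
                            sx, sy = nearest[ny][nx]
                            if best is None or (
                                (x - sx) ** 2 + (y - sy) ** 2
                                < (x - best[0]) ** 2 + (y - best[1]) ** 2
                            ):
                                best = (sx, sy)
                nearest[y][x] = best
        step //= 2
    return [[((x - nearest[y][x][0]) ** 2 + (y - nearest[y][x][1]) ** 2) ** 0.5
             if nearest[y][x] else float("inf") for x in range(w)]
            for y in range(h)]
```

The payoff is pass count: an N-pixel-wide outline needs O(log N) jump flood passes rather than N dilation passes, which is why the technique scales to very wide outlines.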
NetHole / wd1-3-release.md
Created March 13, 2023 06:44 — forked from harubaru/wd1-3-release.md
Official Release Notes for Waifu Diffusion 1.3
NetHole / InputModeDetector.cpp
Created February 22, 2023 06:53 — forked from sinbad/InputModeDetector.cpp
UE4 detecting which input method was last used by each player
#include "InputModeDetector.h"
#include "Input/Events.h"
FInputModeDetector::FInputModeDetector()
{
// 4 local players should be plenty usually (will expand if necessary)
LastInputModeByPlayer.Init(EInputMode::Mouse, 4);
}
bool FInputModeDetector::HandleKeyDownEvent(FSlateApplication& SlateApp, const FKeyEvent& InKeyEvent)
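The detector keeps one "last input mode" per local player and reclassifies it on each input event; for key-down events, the deciding question is whether the key belongs to a gamepad. A rough Python sketch of that bookkeeping (the key set below is a hypothetical stand-in; in UE4 the real check is `FKey::IsGamepadKey()`, and a mode change would fire a delegate):

```python
from enum import Enum

class InputMode(Enum):
    MOUSE = 0
    KEYBOARD = 1
    GAMEPAD = 2

class InputModeDetector:
    def __init__(self, num_players=4):
        # 4 local players should be plenty usually, as in the original
        self.last_mode = [InputMode.MOUSE] * num_players

    def handle_key_down(self, player_index, key_name):
        # hypothetical stand-in for FKey::IsGamepadKey()
        gamepad_keys = {"Gamepad_FaceButton_Bottom", "Gamepad_LeftStick_Up"}
        mode = InputMode.GAMEPAD if key_name in gamepad_keys else InputMode.KEYBOARD
        if self.last_mode[player_index] != mode:
            # the UE4 version broadcasts a change event here
            self.last_mode[player_index] = mode
        return mode
```

Mouse-move and mouse-button events would be handled by sibling handlers that set `InputMode.MOUSE` the same way.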
NetHole / pg-pong.py
Created May 7, 2020 08:24 — forked from karpathy/pg-pong.py
Training a Neural Network ATARI Pong agent with Policy Gradients from raw pixels
""" Trains an agent with (stochastic) Policy Gradients on Pong. Uses OpenAI Gym. """
import numpy as np
import pickle  # original gist targets Python 2 and used "import cPickle as pickle"
import gym
# hyperparameters
H = 200 # number of hidden layer neurons
batch_size = 10 # every how many episodes to do a param update?
learning_rate = 1e-4
gamma = 0.99 # discount factor for reward
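The `gamma` hyperparameter above discounts future rewards when computing returns. The gist does this with a backwards pass over the episode's reward array; a sketch of that function (in Pong a nonzero reward marks the end of a point, so the running sum is reset at those boundaries):

```python
import numpy as np

def discount_rewards(r, gamma=0.99):
    """Compute discounted returns from a 1D array of per-step rewards.

    Walks backwards so each step accumulates gamma-decayed future reward.
    The reset on nonzero reward is Pong-specific: each point is treated
    as its own game boundary.
    """
    discounted = np.zeros_like(r, dtype=np.float64)
    running_add = 0.0
    for t in reversed(range(len(r))):
        if r[t] != 0:
            running_add = 0.0  # reset the sum at a game boundary
        running_add = running_add * gamma + r[t]
        discounted[t] = running_add
    return discounted
```

These returns are then standardized and used to scale the policy gradient, so actions preceding a win are reinforced and those preceding a loss are discouraged.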