FROM nvidia/cuda:12.6.2-base-ubuntu22.04
LABEL com.danswer.maintainer="[email protected]"
LABEL com.danswer.description="This image is for the Danswer model server which runs all of the \
AI models for Danswer. This container and all the code is MIT Licensed and free for all to use. \
You can find it at https://hub.docker.com/r/danswer/danswer-model-server. For more details, \
visit https://github.com/danswer-ai/danswer."
# Default DANSWER_VERSION, typically overridden during builds by GitHub Actions.
ARG DANSWER_VERSION=0.3-dev
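
As the comment notes, DANSWER_VERSION is normally injected at build time. A minimal sketch of such an override (the tag and version values here are placeholders, not the project's actual release scheme):

# hypothetical build command; version and tag are placeholders
docker build --build-arg DANSWER_VERSION=0.3.42 -t danswer/danswer-model-server:0.3.42 .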
@onimsha
onimsha / main.rs
Created December 19, 2022 07:39
wukong_text_color
use owo_colors::OwoColorize;

fn main() {
    // id: 1, slug: e-mail
    println!(
        "My email is {}!",
        "[email protected]".bright_blue()
    );
    // id: 2, slug: github-handle
    println!("My github handle is {}!", "@darkhorse".bright_green());
}
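
To compile this snippet outside the gist, the owo-colors crate would need to be declared as a dependency; a minimal manifest entry might look like this (the version pin is a guess, not taken from the gist):

[dependencies]
owo-colors = "3"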

A Path to learn SRE

What are the objectives?

You only have limited resources (time, money, etc.), so you need a clear objective before starting to learn SRE.
Considering the current job market for SRE positions, it's pretty obvious that most companies build their infrastructure on two things: AWS and Kubernetes (K8s). In short, our objective should be to build skills that let us compete with other engineers on AWS and K8s.

With that assumption, there are two major objectives to achieve: get some AWS certificates and some K8s certificates. Here are the reasons:

  • Without prior experience, a certificate is the most solid evidence of your knowledge in a specific domain.
  • If you decide to invest your time and money in something that is supposed to help you find a good job, you should have tangible results from that investment. Simply claiming you know AWS is not enough to convince anyone, but a certificate is.
@onimsha
onimsha / prometheus.yml
Created September 4, 2019 07:45
Prometheus with consul discovery
global:
  scrape_timeout: 10s
  scrape_interval: 15s
  external_labels:
    cluster: 'MY CLUSTER NAME'
scrape_configs:
  - job_name: 'consul-services'
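
The preview truncates at the job definition. Assuming the gist uses Prometheus' built-in Consul service discovery, the job might continue along these lines (the server address and relabeling rule are illustrative, not taken from the gist):

scrape_configs:
  - job_name: 'consul-services'
    consul_sd_configs:
      - server: 'localhost:8500'    # assumed local Consul agent
    relabel_configs:
      # keep the discovered Consul service name as the job label
      - source_labels: [__meta_consul_service]
        target_label: job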
@onimsha
onimsha / prometheus.yml
Created September 4, 2019 07:44 — forked from KekSfabrik/prometheus.yml
prometheus consul SD config
global:
  scrape_timeout: 10s
  scrape_interval: 15s
  external_labels:
    cluster: 'MY CLUSTER NAME'
# alternatively can be found via consul -- for details see
# https://prometheus.io/docs/prometheus/latest/migration/#alertmanager-service-discovery
alerting:
  alertmanagers:
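
The preview stops right at the alertmanagers list. Given the migration-guide link in the comment above, a plausible continuation discovers Alertmanager through Consul as well (the server address and service name are assumptions):

alerting:
  alertmanagers:
    - consul_sd_configs:
        - server: 'localhost:8500'      # assumed local Consul agent
          services: ['alertmanager']    # assumed Consul service name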
version: '3'
services:
  consul-agent-1: &consul-agent
    image: consul:latest
    networks:
      - consul-demo
    command: "agent -retry-join consul-server-bootstrap -client 0.0.0.0"
    ports:
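
The &consul-agent anchor hints at how the rest of the file likely reuses this definition. A sketch of a second agent declared via a YAML alias (the service name is illustrative):

services:
  consul-agent-2: *consul-agent    # reuses image, networks, and command from the anchor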

Keybase proof

I hereby claim:

  • I am onimsha on github.
  • I am alexuiza (https://keybase.io/alexuiza) on keybase.
  • I have a public key ASD-6zcuuI_cNssg9EuoNASSoUn1fLPKzn3iHcQZ7uE_Nwo

To claim this, I am signing this object:

@onimsha
onimsha / ubuntu-kernel-tuning.sh
Created May 8, 2019 01:32 — forked from soediro/ubuntu-kernel-tuning.sh
ubuntu kernel tuning script
#!/bin/bash
#vim: set expandtab tabstop=4 shiftwidth=4 softtabstop=4:
#
# Author : Nicolas Brousse <[email protected]>
# From : https://www.shell-tips.com/2010/09/13/linux-sysctl-configuration-and-tuning-script/
#
# Added: for kernel versions < 2.6.33, set net.ipv4.tcp_congestion_control=htcp
# Notes :
# This script is a simple "helper" to configure your sysctl.conf on Linux.
# There is no silver bullet. Don't expect a perfect setup; review the comments
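
A minimal sketch of the kernel-version check the header describes, assuming an Ubuntu host where dpkg is available (this is a reconstruction of the described behavior, not the script's actual code):

#!/bin/bash
# hypothetical reconstruction of the check mentioned in the header comment
KERNEL_VERSION=$(uname -r | cut -d- -f1)   # e.g. "5.15.0"
if dpkg --compare-versions "$KERNEL_VERSION" lt "2.6.33"; then
    # older kernels benefit from H-TCP congestion control, per the comment above
    sysctl -w net.ipv4.tcp_congestion_control=htcp
fi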
@onimsha
onimsha / k8s-cluster.json
Created April 7, 2019 14:35 — forked from JCMais/k8s-cluster.json
Grafana dashboards from the Kubernetes app, but using Prometheus metrics instead of the datasource from the plugin
{
  "annotations": {
    "list": [
      {
        "builtIn": 1,
        "datasource": "-- Grafana --",
        "enable": true,
        "hide": true,
        "iconColor": "rgba(0, 211, 255, 1)",
        "name": "Annotations & Alerts",
#!/bin/bash
# Get your node name.
export YOUR_NODE_NAME=$(tmn inspect | head -n1 | awk '{print $2}')
# Get the latest chaindata snapshot; it will be placed into /tmp on your filesystem.
wget https://s3-ap-southeast-1.amazonaws.com/tomochain/backup/mainnet/20181224-chaindata.tar -P /tmp
# Extract the data.
cd /tmp && tar xvf 20181224-chaindata.tar