Ruben Stefanus (rubentea16)
rubentea16 / pr_etiquette.md
Created June 30, 2022 07:04 — forked from mikepea/pr_etiquette.md
Pull Request Etiquette

Why do we use a Pull Request workflow?

PRs are a great way of sharing information and can help us stay aware of the changes occurring in our codebase. They are also an excellent way of getting peer review on the work that we do, without the cost of working in direct pairs.

Ultimately, though, the primary reason we use PRs is to encourage quality in the commits that are made to our code repositories.

Done well, the commits (and their attached messages) contained in a PR tell a story to anyone examining the code at a later date. If we are not careful to ensure the quality of these commits, we silently lose this ability.

Semantic Commit Messages

See how a minor change to your commit message style can make you a better programmer.

Format: <type>(<scope>): <subject>

<scope> is optional

Example
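(The preview truncates the example here. A few illustrative messages in this format, not taken from the original gist:)

feat(parser): add ability to parse arrays
fix: correct minor typos in code
docs: update README with setup steps
chore: add build script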

model = LightningMNISTClassifier(lr_rate=1e-3)
# log the learning rate during training
lr_logger = LearningRateLogger()
# stop early once 'val_loss' stops improving for 5 epochs
early_stopping = EarlyStopping('val_loss', mode='min', patience=5)
# saves checkpoints to 'model_path' whenever 'val_loss' has a new min
# (the preview truncated this call; the monitor/mode arguments below complete it)
checkpoint_callback = ModelCheckpoint(filepath=model_path + 'mnist_{epoch}-{val_loss:.2f}',
                                      monitor='val_loss', mode='min')
def prepare_data():
    # transforms for images
    transform = transforms.Compose([transforms.ToTensor(),
                                    transforms.Normalize((0.1307,), (0.3081,))])
    # download MNIST and keep a small 2,200-sample subset
    mnist_train = MNIST(os.getcwd(), train=True, download=True, transform=transform)
    mnist_train = [mnist_train[i] for i in range(2200)]
    # split into 2,000 training and 200 validation samples
    mnist_train, mnist_val = random_split(mnist_train, [2000, 200])
    return mnist_train, mnist_val  # return added; the preview ends without one
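# sketch (not in the original preview): typical use of prepare_data();
# batch_size=64 is an assumed value
mnist_train, mnist_val = prepare_data()
train_loader = DataLoader(mnist_train, batch_size=64, shuffle=True)
val_loader = DataLoader(mnist_val, batch_size=64)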
# Custom Callbacks
class MyPrintingCallback(pl.callbacks.base.Callback):
    def on_init_start(self, trainer):
        print('Starting to init trainer!')

    def on_init_end(self, trainer):
        print('trainer is init now')

    def on_train_end(self, trainer, pl_module):
        print('do something when training ends')  # body added; truncated in the preview
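The preview stops before these pieces are wired together. A minimal sketch of that wiring, assuming the pre-1.0 Lightning API this snippet targets (where early_stop_callback and checkpoint_callback were Trainer arguments; max_epochs is an assumed value):

trainer = pl.Trainer(max_epochs=10,  # assumed value
                     callbacks=[lr_logger, MyPrintingCallback()],
                     early_stop_callback=early_stopping,
                     checkpoint_callback=checkpoint_callback)
trainer.fit(model)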
rubentea16 / model.py
Created June 27, 2020 11:46
PyTorch Lightning Model Template
import os

import torch
from torch import nn
from torch.nn import functional as F
from torch.utils.data import TensorDataset, DataLoader, random_split
from torchvision import datasets, transforms
from torchvision.datasets import MNIST

import pytorch_lightning as pl
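The preview cuts off after the imports. A minimal sketch of the kind of template the description promises, reusing the LightningMNISTClassifier name seen earlier; the layer sizes and optimizer are assumptions, not the original code:

class LightningMNISTClassifier(pl.LightningModule):
    def __init__(self, lr_rate=1e-3):
        super().__init__()
        self.lr_rate = lr_rate
        self.layer_1 = nn.Linear(28 * 28, 128)  # assumed hidden size
        self.layer_2 = nn.Linear(128, 10)       # 10 MNIST classes

    def forward(self, x):
        x = x.view(x.size(0), -1)    # flatten 28x28 images
        x = F.relu(self.layer_1(x))
        return self.layer_2(x)       # raw logits

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = F.cross_entropy(self(x), y)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.lr_rate)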
rubentea16 / multipreprocessing.py
Created May 23, 2020 16:42
multiprocessing technique
import concurrent.futures
import time
start = time.perf_counter()
def do_something(seconds):
    print(f'Sleeping {seconds} second(s)...')
    time.sleep(seconds)
    return f'Done Sleeping...{seconds}'
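The preview ends at the worker function. A sketch of how such a function is typically fanned out across processes with concurrent.futures; the sleep durations are assumed values:

if __name__ == '__main__':
    with concurrent.futures.ProcessPoolExecutor() as executor:
        # run the sleeps in parallel worker processes
        results = executor.map(do_something, [5, 4, 3, 2, 1])
        for result in results:
            print(result)

    finish = time.perf_counter()
    print(f'Finished in {round(finish - start, 2)} second(s)')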