import ray

# Initialize Ray. To forward credentials (e.g. a Hugging Face token) to
# workers, pass a runtime_env:
# ray.init(runtime_env={"env_vars": {"HF_TOKEN": "..."}})
ray.init()

# Create a sample dataset with prompts
data = [
    {"prompt": "What is the capital of France?"},
    {"prompt": "Explain quantum computing in one sentence."},
    {"prompt": "Write a haiku about programming."},
]
ds = ray.data.from_items(data)

"""
Large-scale deduplication using Ray Data with Spark-like operations.
Based on the approach from: https://huggingface.co/blog/dedup
This implementation uses Ray Data's native operations (map_batches, groupby, etc.)
to implement MinHash + LSH deduplication, similar to the Spark approach.
Architecture:
1. MinHash signature generation (map_batches)
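Only stage 1 of the architecture survives in this excerpt. Below is a minimal sketch of that stage, assuming a `text` column and illustrative in-process hashing; a real pipeline would use a stable hash (e.g. sha1 or xxhash) instead of Python's per-process hash() and would tune the permutation count.

import numpy as np
import ray

NUM_PERM = 128                  # permutations per MinHash signature
MERSENNE_PRIME = (1 << 61) - 1  # modulus for the universal hash family

# Coefficients kept below 2**31 so a*x + b never overflows uint64.
rng = np.random.RandomState(42)
A = rng.randint(1, 1 << 31, size=NUM_PERM, dtype=np.uint64)
B = rng.randint(0, 1 << 31, size=NUM_PERM, dtype=np.uint64)

def minhash_batch(batch):
    """Attach a MinHash signature (shape [NUM_PERM]) to every document."""
    sigs = []
    for text in batch["text"]:
        # Word shingles, masked to 32 bits; hash() is illustrative only.
        shingles = np.array(
            [hash(w) & 0xFFFFFFFF for w in set(text.split())], dtype=np.uint64
        )
        # (a*x + b) mod p per permutation; the row-wise min is the signature.
        sigs.append(((A[:, None] * shingles[None, :] + B[:, None])
                     % MERSENNE_PRIME).min(axis=1))
    batch["signature"] = np.stack(sigs)
    return batch

ds = ray.data.from_items([{"text": "the quick brown fox"},
                          {"text": "the quick brown fox jumps"}])
ds = ds.map_batches(minhash_batch, batch_format="numpy")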
#!/usr/bin/env python3
"""
vLLM multi-node deployment script.
Automatically handles Ray cluster setup and vLLM server launch.
This script simplifies vLLM multi-node deployment by automatically handling the Ray cluster
setup and vLLM server launch based on the current node's role, eliminating the need for
multiple terminals and manual Ray cluster management.
Usage Examples:
import argparse
from datetime import datetime, timedelta
from typing import Dict
import numpy as np
import pandas as pd
# from benchmark import Benchmark
import ray
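As a rough illustration of the role-based dispatch the docstring describes (not the script's actual CLI; node_rank, head_addr, and model are hypothetical parameters standing in for whatever argparse collects), the control flow boils down to:

import subprocess

def launch(node_rank: int, head_addr: str, model: str) -> None:
    """Start Ray on this node; the head node additionally serves the model."""
    if node_rank == 0:
        # Head node: start the Ray head process, then the vLLM server.
        subprocess.run(["ray", "start", "--head", "--port", "6379"], check=True)
        subprocess.run(["vllm", "serve", model], check=True)
    else:
        # Worker node: join the head's cluster and block so the process stays up.
        subprocess.run(["ray", "start", f"--address={head_addr}:6379", "--block"],
                       check=True)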
#!/usr/bin/env python3
"""
dataloader_benchmark.py - A script to benchmark Ray Data performance on image datasets.
"""
import argparse
import os
import time
import ray
import torch
import torch.utils.data
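The core measurement loop might look like the following sketch, assuming uniformly sized images and a hypothetical dataset path; ray.data.read_images and iter_torch_batches are the relevant Ray Data APIs:

def benchmark_ray_data(path: str, batch_size: int = 256) -> float:
    """Return images/sec when streaming an image dataset through Ray Data."""
    ds = ray.data.read_images(path)
    num_images = 0
    start = time.perf_counter()
    for batch in ds.iter_torch_batches(batch_size=batch_size):
        num_images += batch["image"].shape[0]  # "image" is read_images' column
    elapsed = time.perf_counter() - start
    return num_images / elapsed

if __name__ == "__main__":
    # Hypothetical path; point this at your own image directory.
    print(f"{benchmark_ray_data('/data/imagenet-subset'):.1f} images/sec")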
#!/usr/bin/env python3
"""
dataloader_benchmark.py - A script to benchmark PyTorch DataLoader performance on image datasets.
"""
import argparse
import os
import time
import torch
import torch.utils.data
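A comparable measurement loop for the DataLoader path, assuming torchvision is available and the same hypothetical dataset root; the Resize keeps batches collatable when source images vary in size:

import torchvision

def benchmark_dataloader(path: str, batch_size: int = 256, workers: int = 8) -> float:
    """Return images/sec when streaming an ImageFolder through a DataLoader."""
    transform = torchvision.transforms.Compose([
        torchvision.transforms.Resize((224, 224)),
        torchvision.transforms.ToTensor(),
    ])
    dataset = torchvision.datasets.ImageFolder(path, transform=transform)
    loader = torch.utils.data.DataLoader(
        dataset, batch_size=batch_size, num_workers=workers
    )
    num_images = 0
    start = time.perf_counter()
    for images, _labels in loader:
        num_images += images.size(0)
    elapsed = time.perf_counter() - start
    return num_images / elapsed

if __name__ == "__main__":
    # Hypothetical path; point this at your own image directory.
    print(f"{benchmark_dataloader('/data/imagenet-subset'):.1f} images/sec")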

Ray Data LLM

High-performance batch inference for large language models, powered by Ray Data.

Overview

Ray Data LLM provides an efficient, scalable solution for batch processing LLM inference workloads with:

  • High Throughput: Optimized performance using vLLM's paged attention and continuous batching
  • Distributed Processing: Scale across multiple GPUs and machines using Ray Data
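
For orientation, here is a minimal batch-inference sketch using the ray.data.llm processor API; exact parameter names can differ slightly across Ray releases, and the model ID is just an example:

import ray
from ray.data.llm import vLLMEngineProcessorConfig, build_llm_processor

config = vLLMEngineProcessorConfig(
    model_source="meta-llama/Llama-3.1-8B-Instruct",  # example model
    engine_kwargs={"max_model_len": 8192},
    concurrency=1,   # number of vLLM engine replicas
    batch_size=64,
)
processor = build_llm_processor(
    config,
    # Map each row to a chat request with sampling parameters.
    preprocess=lambda row: dict(
        messages=[{"role": "user", "content": row["prompt"]}],
        sampling_params={"temperature": 0.3, "max_tokens": 256},
    ),
    # Keep the original row and attach the generated answer.
    postprocess=lambda row: dict(answer=row["generated_text"], **row),
)

ds = ray.data.from_items([{"prompt": "What is the capital of France?"}])
print(processor(ds).take_all())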

RayLLM-Batch

Engine Configurations

Here is the full set of supported engine configuration options:

model_id: <HF model ID or local model path>
llm_engine: vllm
accelerator_type: <GPU type>
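
A filled-in example; the model ID and accelerator type below are placeholders you would replace to match your cluster:

model_id: meta-llama/Llama-3.1-8B-Instruct
llm_engine: vllm
accelerator_type: L4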
