@edgarriba
Last active August 5, 2016 07:08
@startuml
enum DeviceType {
CPU
GPU
}
enum BackendType {
TINYCNN
AVX
NNPACK
LIBDNN
OPENCL
}
class Session {
- String name
- void tuneKernels()
+ void register_device(Device* device)
+ void run()
}
class Device {
- int id
- DeviceType device_type
- Bool checkAvailability(Layer* layer)
+ Device(DeviceType device_type, int id)
+ void register_op(Layer* layer)
}
class TinyCNNDevice {
}
class AvxDevice {
}
class NNPackDevice {
}
class OpenCLDevice {
}
Device <|-- CPUDevice
Device <|-- GPUDevice
CPUDevice <|-- TinyCNNDevice
CPUDevice <|-- AvxDevice
CPUDevice <|-- NNPackDevice
CPUDevice <|-- OpenCLDevice
GPUDevice <|-- OpenCLDevice
class DeviceContext {
}
class OpenCLDeviceContext {
- CLCudaAPI::Platform platform
- CLCudaAPI::Device device
- CLCudaAPI::Context context
- CLCudaAPI::Queue queue
}
DeviceContext <|-- OpenCLDeviceContext
OpenCLDeviceContext --> KernelLauncher
class KernelLauncher {
}
class LibDNNKernelLauncher {
- greentea::LibDNNConv<float_t> kernel
}
class CLCudaAPIKernelLauncher {
- CLCudaAPI::Kernel kernel
}
KernelLauncher <|-- LibDNNKernelLauncher
KernelLauncher <|-- CLCudaAPIKernelLauncher
class Node {
}
class Edge {
- shape3d shape
}
abstract class Layer {
- String layer_type
- BackendType backend_type
- void tuneKernel()
+ virtual void forward_propagation()
+ virtual void backward_propagation()
}
abstract class ConvolutionalLayer {
+ virtual void forward_propagation()
+ virtual void backward_propagation()
}
class Conv2dTinyOp {
}
class Conv2dAvxOp {
}
class Conv2dNNPackOp {
}
class Conv2dOpenCLOp {
}
class Conv2dLibDNNOp {
}
Layer <|-- ConvolutionalLayer
ConvolutionalLayer <|-- Conv2dTinyOp
ConvolutionalLayer <|-- Conv2dAvxOp
ConvolutionalLayer <|-- Conv2dNNPackOp
ConvolutionalLayer <|-- Conv2dOpenCLOp
ConvolutionalLayer <|-- Conv2dLibDNNOp
Conv2dOpenCLOp --> CLCudaAPIKernelLauncher
Conv2dLibDNNOp --> LibDNNKernelLauncher
abstract class MaxPoolingLayer {
+ virtual void forward_propagation()
+ virtual void backward_propagation()
}
Layer -- DeviceContext
DeviceContext -- OpenCLDevice
Device *-- Layer
Session *-- Device
Node <|-- Layer
Node *-- Edge : prev
Node *-- Edge : next
Edge --> Node : prev
Edge *-- Node : next
Edge *-- Data : data
Edge *-- Data : grad
Layer <|-- MaxPoolingLayer
@enduml
@edgarriba
hi! I'm opening this gist to iterate on the framework design. IMO we should reshape the backend API, since we now need to share a device context between layers, depending on user-specified options (or a possible future automatic mechanism) such as device type, layer backend, and layer type (e.g. LibDNN for convolutions).
