Last active: December 10, 2016 20:30
Gist: CMCDragonkai/ffaba89f2b97072498bc
Haskell: Tensor Dimensional Projection (like Numpy's Tuple Selection Objects)
-- with regards to: http://stackoverflow.com/questions/32630035/replicating-numpys-advanced-indexing-slicing-in-haskell

-- let's create some tensors:

-- 5 rows, 4 columns
two_d_matrix =
    [
        [
            0
        | _ <- [0..3]]
    | _ <- [0..4]]
-- 5 rows, 4 columns, 2 layers; every block contains a 0
three_d_tensor =
    [
        [
            [
                0
            | _ <- [0..1]]
        | _ <- [0..3]]
    | _ <- [0..4]]
-- notice the free variables x, y, z
-- the closure of the field expression includes the indices (row, column, layer)
-- this allows you to change the value depending on the position of the field
-- here we just tuple construct the x, y, z index in each block
-- these are basically voxel blocks!
interesting_tensor =
    [
        [
            [
                (x,y,z)
            | z <- [0..1]]
        | y <- [0..3]]
    | x <- [0..4]]
-- while the above works with lists, I'm sure there are ways to rebind the comprehension
-- syntax to operate on vectors or other functor types
-- see monad comprehensions

-- now for numpy arrays, the subset of the functionality to be implemented is this:
-- `two_d_array[0:1:1, 0:4:2]`
-- gets the first row (stepped by 1), and every second column of the first four columns
-- (that is, columns 0 and 2)
-- step 1 means skipping 0 elements, step 2 means skipping 1 element
-- all this does is reverse function application
(!!!) = flip ($)
infixl 4 !!! -- composable: tensor !!! projection !!! projection

-- ft is functor transformation
-- et is element transformation (but it could also be just another sub-functor)
(!.) :: (Functor f) => (f b -> f c) -> (a -> b) -> f a -> f c
ft !. et = ft . fmap et
infixr 5 !.
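To see `(!.)` in isolation before the tensor projections below, here is a minimal self-contained sketch on a small 2×3 matrix (the definitions of `(!!!)` and `(!.)` are repeated so the snippet runs on its own):

```haskell
-- minimal sketch of (!.): compose a structure-level transformation with
-- an element-level one; definitions copied from the file above
(!!!) :: a -> (a -> b) -> b
(!!!) = flip ($)
infixl 4 !!!

(!.) :: Functor f => (f b -> f c) -> (a -> b) -> f a -> f c
ft !. et = ft . fmap et
infixr 5 !.

main :: IO ()
main = do
    let m = [[1,2,3],[4,5,6]] :: [[Int]]
    -- (take 1 !. map (*2)) = take 1 . fmap (map (*2)):
    -- double every element, then keep only the first row
    print (m !!! take 1 !. map (*2))         -- [[2,4,6]]
    -- chain once more: (*2) now acts on the scalars two levels down
    print (m !!! take 1 !. take 2 !. (*2))   -- [[2,4]]
```

Because `(!.)` is `infixr`, each extra `!. f` pushes `f` one functor level deeper, which is exactly what makes the dimensional projections below compose.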
-- inclusive slice: elements from index `from` to index `to`
slice :: Int -> Int -> [a] -> [a]
slice from to xs = take (to - from + 1) (drop from xs)

-- keep every nth element, starting from index 0
step :: Int -> [a] -> [a]
step n = map head . takeWhile (not . null) . iterate (drop n)

stepSlice :: Int -> Int -> Int -> [a] -> [a]
stepSlice b e s t = (step s . slice b e) t
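A quick sketch checking these slicing primitives on a flat list (definitions repeated so it runs standalone); note that laziness lets `step` consume infinite lists too:

```haskell
-- standalone check of the 1-dimensional slicing primitives from above
slice :: Int -> Int -> [a] -> [a]
slice from to xs = take (to - from + 1) (drop from xs)

step :: Int -> [a] -> [a]
step n = map head . takeWhile (not . null) . iterate (drop n)

stepSlice :: Int -> Int -> Int -> [a] -> [a]
stepSlice b e s = step s . slice b e

main :: IO ()
main = do
    print (slice 2 5 [0..9])         -- [2,3,4,5]
    print (step 3 [0..9])            -- [0,3,6,9]
    print (stepSlice 1 8 2 [0..9])   -- [1,3,5,7]
    -- laziness: step works on infinite lists as well
    print (take 4 (step 2 [0..]))    -- [0,2,4,6]
```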
result = interesting_tensor !!!
    slice 1 2 !.     -- 1d: slice row 1 to row 2
    step 2 !.        -- 2d: step 2 cols (skip 1 col)
    stepSlice 1 3 2  -- 3d: slice layer 1 to layer 3, then step 2 layers (skip 1 layer)
{-
result = [[[(1,0,1)],[(1,2,1)]],[[(2,0,1)],[(2,2,1)]]]
[                  2 rows
    [              2 columns
        [(1,0,1)],     1 layer
        [(1,2,1)]
    ],
    [
        [(2,0,1)],
        [(2,2,1)]
    ]
]
It's 2 rows (slice 1 2), 2 columns (step 2 over the 4 columns), and 1 layer
(the tensor only has 2 layers, so stepSlice 1 3 2 gives back just the layer at index 1).
-}
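As a cross-check against the numpy example from earlier, `two_d_array[0:1:1, 0:4:2]` can be expressed with the same combinators (definitions repeated so the sketch runs alone); note that this `slice` is end-inclusive, so numpy's exclusive `0:1` becomes `slice 0 0`:

```haskell
-- self-contained sketch: the numpy slice a[0:1:1, 0:4:2] expressed with
-- the combinators defined in the file above
(!!!) :: a -> (a -> b) -> b
(!!!) = flip ($)
infixl 4 !!!

(!.) :: Functor f => (f b -> f c) -> (a -> b) -> f a -> f c
ft !. et = ft . fmap et
infixr 5 !.

slice :: Int -> Int -> [a] -> [a]
slice from to xs = take (to - from + 1) (drop from xs)

step :: Int -> [a] -> [a]
step n = map head . takeWhile (not . null) . iterate (drop n)

main :: IO ()
main = do
    -- a 5x4 matrix holding its own (row, column) coordinates
    let m = [[(x, y) | y <- [0..3]] | x <- [0..4 :: Int]]
    -- numpy: m[0:1:1, 0:4:2] -> row 0, columns 0 and 2
    print (m !!! slice 0 0 !. step 2)  -- [[(0,0),(0,2)]]
```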
{-
The way this works is that `(!.)` is a form of nested composition. Normal composition
would look like `(f b -> f c) -> (f a -> f b) -> f a -> f c ~ (b -> c) -> (a -> b) -> a -> c`.
Instead, `(!.)` takes a morphism working on the structure (the projection preserves the
functor type) and a morphism working on the contents, and composes them together.
We get a composed nested morphism with 2 levels: the top level works on the container,
the bottom level works on the contents. The key here is that the bottom level may itself
be working on a sub-container. This reveals that you can use normal composition to do
the same thing:
-}
(!>) :: (Functor f) => (f b -> f c) -> (f a -> f b) -> f a -> f c
ct1 !> ct2 = ct1 . ct2
infixr 5 !>

result2 = interesting_tensor !!! (slice 1 2 !> fmap (step 2 !> fmap (step 2 . slice 1 3)))
correct = result == result2
-- a list selection object is also possible
-- for linked lists, the worst-case complexity of this is O(m*n)
-- m is the length of the indices
-- n is the length of the list
-- you should be using vectors or arrays if you really want to do this
-- and a difference list could be used instead of reversing at the end
select :: [Int] -> [a] -> [a]
select indices list = go indices list [] where
    go [] _ results =
        reverse results
    go (i:is) list results =
        go is list ((list !! i):results)
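Before the tensor example below, here is `select` on a flat list (definition repeated so the sketch is standalone), showing that indices may repeat and come in any order:

```haskell
-- standalone sketch of the list selection object from above
select :: [Int] -> [a] -> [a]
select indices list = go indices list [] where
    go [] _ results      = reverse results
    go (i:is) xs results = go is xs ((xs !! i) : results)

main :: IO ()
main = do
    print (select [2,0,2] "abcd")       -- "cac" (index 2 used twice)
    print (select [3,1] [10,20,30,40])  -- [40,20] (order reversed)
```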
-- featuring duplication and change of order
result3 = interesting_tensor !!!
    select [0,1] !.
    select [1,1] !.
    select [1,0]
{-
The end result is that we compose together an n-dimensional projection function, then apply
the projection onto the tensor.
-}
-- hey look, we can even do this on infinite tensors!
infinite_tensor =
    [
        [
            [
                (x,y,z)
            | z <- [0..]]
        | y <- [0..]]
    | x <- [0..]]
result4 = infinite_tensor !!! slice 1 10 !. stepSlice 1 10 2 !. slice 3 4
-- reversed stepping is still not available, but for linked lists it can easily be done by first reversing the list
-- for data structures with constant-time indexing, one can instead access the array from the end rather than reversing
-- the above code should be generalisable to an "Indexable" typeclass, which is a subclass of the Functor class
-- Matrices, Tensors, and Vectors should all allow similar operations
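A hedged sketch of what such a typeclass might look like — the class name `Indexable` comes from the comment above, but the method names `islice` and `istep` and the overall shape are assumptions, not part of the original code:

```haskell
-- hypothetical sketch: an "Indexable" subclass of Functor, as the comment
-- above suggests; method names islice/istep are assumed, not from the gist
class Functor f => Indexable f where
    islice :: Int -> Int -> f a -> f a  -- inclusive slice, like `slice`
    istep  :: Int -> f a -> f a         -- keep every nth element, like `step`

-- the list instance just reuses the definitions from this file
instance Indexable [] where
    islice from to = take (to - from + 1) . drop from
    istep n = map head . takeWhile (not . null) . iterate (drop n)

-- any Indexable functor could then be projected in the same style
main :: IO ()
main = print (istep 2 (islice 1 8 [0..9 :: Int]))  -- [1,3,5,7]
```

A `Data.Vector` instance could implement `islice` with the library's O(1) slicing instead of `take`/`drop`, which is the point of abstracting the class.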