*** SHED SKIN Python-to-C++ Compiler ***

Copyright 2005-2013 Mark Dufour; License GNU GPL version 3 (See LICENSE)

[infer.py](https://github.com/shedskin/shedskin/blob/master/shedskin/infer.py): perform iterative type analysis

we combine two techniques from the literature to analyze both parametric polymorphism and data polymorphism adaptively. these techniques are Agesen's cartesian product algorithm [0] and Plevyak's iterative flow analysis [1] (the data-polymorphic part). for details about these algorithms, see Ole Agesen's excellent PhD thesis [2]; for details about the Shed Skin implementation, see Mark Dufour's MSc thesis [3].
the cartesian product algorithm duplicates functions (or their graph counterparts) based on the cartesian product of possible argument types, whereas iterative flow analysis duplicates classes based on imprecision observed at assignment points. the two integers mentioned in the graph.py description keep track of duplicates along these two dimensions (first the class duplicate number, then the function duplicate number).
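to make the cartesian product step concrete, here is a minimal Python sketch (a toy illustration, not Shed Skin's actual cpa(); the cpa_duplicates() helper and its dictionary representation of duplicates are made up for this example):

```python
from itertools import product

# toy illustration of the cartesian product algorithm (CPA): given the set
# of candidate types inferred for each argument, create one specialized
# duplicate of the function per combination of argument types.

def cpa_duplicates(func_name, arg_type_sets):
    """arg_type_sets: one set of candidate types per formal argument."""
    duplicates = {}
    for combo in product(*arg_type_sets):       # the cartesian product
        duplicates[(func_name, combo)] = combo  # stand-in for a real duplicate
    return duplicates

# e.g. a function f(x, y) called with x in {int, float} and y in {str}
# yields two duplicates: one for (int, str) and one for (float, str)
print(cpa_duplicates('f', [{'int', 'float'}, {'str'}]))
```

the real implementation duplicates the graph counterpart of the function and seeds its allocation points (ifa_seed_template()), but the combinatorial core is the same product over per-argument type sets.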
the combined technique scales reasonably well, but can still explode in many cases. there are many ways to improve this. some ideas:

* an iterative deepening approach, merging redundant duplicates after each deepening
* adding and propagating filters across variables: e.g. 'a+1; a=b' implies that a and b must be of a type that implements '__add__' (see the sketch after this list)
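a minimal sketch of that second idea, assuming a toy table of types that implement '__add__' and a hypothetical apply_filters() helper; none of these names are part of Shed Skin:

```python
# toy sketch of propagating type filters across assignments: 'a + 1'
# constrains a to types implementing __add__, and 'a = b' lets that
# filter flow backwards to b, shrinking b's candidate type set.

SUPPORTS_ADD = {'int', 'float', 'str', 'list'}   # toy table of types with __add__

def apply_filters(candidates, filters, assignments):
    """candidates: var -> set of possible types; filters: var -> allowed types;
    assignments: (target, source) pairs, e.g. ('a', 'b') for 'a = b'."""
    changed = True
    while changed:                               # iterate to a fixed point
        changed = False
        for var, allowed in filters.items():     # apply direct filters
            narrowed = candidates[var] & allowed
            if narrowed != candidates[var]:
                candidates[var], changed = narrowed, True
        for target, source in assignments:       # a = b: b must satisfy a's filter
            narrowed = candidates[source] & candidates[target]
            if narrowed != candidates[source]:
                candidates[source], changed = narrowed, True
    return candidates

types = {'a': {'int', 'float', 'NoneType'}, 'b': {'int', 'NoneType'}}
print(apply_filters(types, {'a': SUPPORTS_ADD}, [('a', 'b')]))
# NoneType is filtered out of a directly, and out of b via the assignment
```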
a complementary but very practical approach to (greatly) improve scalability would be to profile programs before compiling them, resulting in quite precise (lower-bound) type information. type inference can then be used to 'fill in the gaps'.
# iterative_dataflow_analysis():

## FORWARD PHASE

* propagate types along the constraint graph (propagate())
* all the while creating function duplicates using the cartesian product algorithm (cpa())
* when creating a function duplicate, fill in its allocation points with the correct type (ifa_seed_template())

## BACKWARD PHASE

* determine which classes need to be duplicated, according to the imprecision points that were found (ifa())
* from these imprecision points, follow the constraint graph backwards to find the allocation points involved
* duplicate the classes, and spread them over these allocation points

## CLEANUP

* quit if there are no further imprecision points (ifa() did not find anything)
* otherwise, restore the constraint graph to its original state and restart
* all the while maintaining the types for each allocation point in gx.alloc_info
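schematically, the outer loop looks roughly like the sketch below; propagate(), ifa() and restore() here are trivial stand-ins for the real functions in infer.py, present only to show the control flow:

```python
# schematic, runnable-as-a-stub outline of iterative_dataflow_analysis();
# the helpers are placeholders, not the real infer.py functions.

def propagate(graph, alloc_info):
    """forward phase placeholder: propagate types, expand CPA duplicates."""
    graph['types'] = dict(alloc_info)

def ifa(graph):
    """backward phase placeholder: return {allocation point: split classes},
    or an empty dict when no imprecision points remain."""
    return graph.pop('pending_splits', {})

def restore(graph):
    """cleanup placeholder: reset the constraint graph to its original state."""
    graph['types'] = {}

def iterative_dataflow_analysis(graph):
    alloc_info = {}                      # types maintained per allocation point
    while True:
        propagate(graph, alloc_info)     # FORWARD PHASE (propagate() + cpa())
        splits = ifa(graph)              # BACKWARD PHASE (ifa())
        if not splits:                   # CLEANUP: no imprecision left -> done
            break
        alloc_info.update(splits)        # remember duplicates per allocation point
        restore(graph)                   # restore the graph and restart
    return alloc_info

# toy run: one allocation point gets split once, then the analysis converges
print(iterative_dataflow_analysis(
    {'pending_splits': {'alloc@example.py:3': ['list(int)', 'list(float)']}}))
```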
update: we now analyze programs incrementally, adding several functions at a time and redoing the full analysis after each addition. this seems to greatly help keep the CPA from exploding early on.
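a minimal sketch of that incremental driver, with a made-up analyze() placeholder and batch size:

```python
# toy sketch of incremental analysis: grow the set of analyzed functions a
# few at a time and redo the full analysis on each prefix, so the cartesian
# product stays small in the early rounds.

def analyze(functions):
    """placeholder for a full run of the iterative dataflow analysis."""
    return {f: 'types for ' + f for f in functions}

def analyze_incrementally(all_functions, batch_size=3):
    analyzed, result = [], {}
    for i in range(0, len(all_functions), batch_size):
        analyzed.extend(all_functions[i:i + batch_size])
        result = analyze(analyzed)       # redo the full analysis each time
    return result

print(analyze_incrementally(['main', 'parse', 'eval', 'dump', 'util']))
```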
[0] Agesen's cartesian product algorithm: [http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.30.8177](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.30.8177)
[1] Plevyak's iterative flow analysis: [http://www.plevyak.com/ifa-submit.pdf](http://www.plevyak.com/ifa-submit.pdf)
[2] Ole Agesen's PhD thesis: [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.93.4969](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.93.4969&rep=rep1&type=pdf)
[3] Mark Dufour's MSc thesis: [http://mark.dufour.googlepages.com/shedskin.pdf](http://mark.dufour.googlepages.com/shedskin.pdf)