These are some Python design patterns which I have found through research and through my own learning. Examples are attached with the code to help you see how each might work in practice.
Last active: January 26, 2021 22:05
Python Design Patterns
"""Chain of Responsibility Design Pattern Gist.

Suppose we have content which we want to validate or pass through some
filters before making the content public.

It can be pretty tedious to manually pass the content through different
methods.

This is a possible use case for the chain of responsibility design pattern.
We can define some filtration or validation functions and have a class that
accepts a list of those functions as an argument.

The class has a method which applies each function to the content in turn
and then returns the content.
"""
def offensive_filter(content):
    """Replaces offensive words with 'great'."""
    bad_words = ['stupid', 'idiot', 'potatohead']
    for bad_word in bad_words:
        content = content.replace(bad_word, 'great')
    return content
def remove_incriminating_terms(content):
    """Inserts 'not ' after the leading 'I am ' of each incriminating phrase."""
    incriminating_phrases = ['I am a criminal', 'I am guilty', 'I am bad']
    for incriminating_phrase in incriminating_phrases:
        # Slice at index 5, i.e. just after 'I am ', and insert 'not '.
        content = content.replace(
            incriminating_phrase,
            incriminating_phrase[:5] + 'not ' + incriminating_phrase[5:]
        )
    return content
class ContentFilter:
    """Applies a chain of filter functions to content, in order."""

    def __init__(self, filters=None):
        self._filters = list()
        if filters is not None:
            self._filters += filters

    def filter(self, content):
        # Avoid shadowing the built-in `filter` inside the loop.
        for filter_func in self._filters:
            content = filter_func(content)
        return content
content = 'I feel so stupid! I am a criminal'
content_filter = ContentFilter([offensive_filter, remove_incriminating_terms])
filtered_content = content_filter.filter(content)

print(content)
# >> I feel so stupid! I am a criminal
print(filtered_content)
# >> I feel so great! I am not a criminal