{ "cells": [ { "cell_type": "markdown", "id": "b30f0a7e-6d6e-4b9e-bece-e11f2a0f78fc", "metadata": {}, "source": [ "# Sentiment Analysis\n", "\n", "In this post we are going to learn more about the [technical requirements to become a Data Scientist](https://medium.com/@fmnobar/data-scientist-role-requirements-bbae1f85d4d5) by taking a closer look at Sentiment Analysis. In the field of Natural Language Processing (NLP), sentiment analysis is a tool to identify, quantify, extract and study subjective information from textual data. For example, \"I like watching TV shows.\" carries a positive sentiment. But maybe the sentiment could even be \"relatively more\" positive if one says that \"I really like watching TV shows!\". Sentiment analysis attempts at quantifying the sentiment conveyed in textual data. One of the most common use cases of sentiment analysis is enabling brands and businesses to review their customers' feedback and monitor their level of satisfaction. As you can imagine, it would be quite expensive to have human headcount read customer reviews to determine whether the customers are happy or not with the business, service, or products. In such cases brands and businesses use machine learning techniques such as sentiment analysis to achieve similar results at scale.\n", "\n", "Similar to my other posts, learning is achieved through practice questions and answers. I will include hints and explanations in the questions as needed to make the journey easier. Lastly, the notebook that I used to create this exercise is also linked in the bottom of the post, which you can download, run and follow along.\n", "\n", "Let’s get started!\n", "\n", "## Data Set\n", "\n", "In order to practice sentiment analysis, we are going to use a test set from [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/sentiment+labelled+sentences), which can be downloaded from [this link](https://gist.github.com/fmnobar/88703ec6a1f37b3eabf126ad38c392b8). 
\n", "\n", "Let's start with importing the libraries we will be using today, then read the data set into a dataframe and look at the top five rows of the dataframe to familiarize ourselves with the data." ] }, { "cell_type": "code", "execution_count": 46, "id": "b925656b-9706-487f-aa7c-1b8f4d771148", "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
textlabel
0A very, very, very slow-moving, aimless movie about a distressed, drifting young man.0
1Not sure who was more lost - the flat characters or the audience, nearly half of whom walked out.0
2Attempting artiness with black & white and clever camera angles, the movie disappointed - became even more ridiculous - as the acting was poor and the plot and lines almost non-existent.0
3Very little music or anything to speak of.0
4The best scene in the movie was when Gerardo is trying to find a song that keeps running through his head.1
\n", "
" ], "text/plain": [ " text \\\n", "0 A very, very, very slow-moving, aimless movie about a distressed, drifting young man. \n", "1 Not sure who was more lost - the flat characters or the audience, nearly half of whom walked out. \n", "2 Attempting artiness with black & white and clever camera angles, the movie disappointed - became even more ridiculous - as the acting was poor and the plot and lines almost non-existent. \n", "3 Very little music or anything to speak of. \n", "4 The best scene in the movie was when Gerardo is trying to find a song that keeps running through his head. \n", "\n", " label \n", "0 0 \n", "1 0 \n", "2 0 \n", "3 0 \n", "4 1 " ] }, "execution_count": 46, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Import required packages\n", "import numpy as np\n", "import pandas as pd\n", "import nltk\n", "\n", "# Making width of the column viewable\n", "pd.set_option('display.max_colwidth', None)\n", "\n", "# Read the data into a dataframe\n", "df = pd.read_csv('imdb_labelled.csv')\n", "\n", "# look at the top five rows of the dataframe\n", "df.head()" ] }, { "cell_type": "markdown", "id": "74bda39c-042d-428f-8eae-94fbb104d347", "metadata": {}, "source": [ "There are only two columns. \"text\" contains the review itself and \"label\" indicates the sentiment of the review. In this dataset a label of 1 indicates a postivie sentiment, while a label of 0 indicates a negative sentiment. Since there are only two classes of labels, let's look at whether these two classes are balanced or imbalanced. Classes are considered balanced when classes (roughly) account for the same portion of the total observations. Let's look at the data, which makes this easier to understand. 
" ] }, { "cell_type": "code", "execution_count": 47, "id": "a424b8a5-fea6-42be-aa03-ee5d8e2e03f4", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "1 386\n", "0 362\n", "Name: label, dtype: int64" ] }, "execution_count": 47, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df['label'].value_counts()" ] }, { "cell_type": "markdown", "id": "df9fbc49-2169-42e7-822d-9c2f6208d8d3", "metadata": {}, "source": [ "The data is almost equally divided between positive and negative sentiments, therefore we consider the data to have balanced classes.\n", "\n", "Next, we are going to create a sample string, which includes the very first entry in the \"text\" column of the dataframe. In some of the questions, we will apply various techniques to this one sample to better understand the concepts. Let's go ahead and create our sample string." ] }, { "cell_type": "code", "execution_count": 48, "id": "7898707d-df56-424a-a23a-b0bb53e84b0d", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'A very, very, very slow-moving, aimless movie about a distressed, drifting young man. '" ] }, "execution_count": 48, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Take the very first text entry of the dataframe\n", "sample = df.text[0]\n", "sample" ] }, { "cell_type": "markdown", "id": "67ee7ecf-ac2d-4f6b-954c-7675782246ef", "metadata": {}, "source": [ "# Tutorial + Questions and Answers\n", "\n", "## Tokens and Bigrams\n", "\n", "In order for programs and computers to understand textual data, we start by breaking down larger segments of textual data into smaller pieces. Breaking down a sequence of characters (such as a string) into smaller pieces (or substrings) is called tokenization and the functions that perform tokenization are called tokenizers. A tokenizer can break down a given string into a list of substrings. Let's look at an example. 
\n", "\n", "Input: `What is a sentence?`\n", "\n", "If we apply a tokenizer to the above \"Input\", we will get the following \"Output\":\n", "\n", "Output: `['What', 'is', 'a', 'sentence', '?']`\n", "\n", "As expected, the output is a sequence of the tokenized substrings of the input sentence. \n", "\n", "We can implement this concept with the `nltk.word_tokenize` package. Let's see how this is implemented in an example.\n", "\n", "**Question 1:**\n", "\n", "Tokenize the generated sample and return the first 10 tokens.\n", "\n", "**Answer:**" ] }, { "cell_type": "code", "execution_count": 49, "id": "3f6e9d5c-43eb-4847-902a-e8421d149a30", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "['A', 'very', ',', 'very', ',', 'very', 'slow-moving', ',', 'aimless', 'movie']" ] }, "execution_count": 49, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Import the package\n", "from nltk import word_tokenize\n", "\n", "# Tokenize the sample\n", "sample_tokens = word_tokenize(sample)\n", "\n", "# Return the first 10 tokens\n", "sample_tokens[:10]" ] }, { "cell_type": "markdown", "id": "7de19d0b-c606-40cb-ba5a-6a98873cf98d", "metadata": {}, "source": [ "A token is also called a unigram. If we combine two unigrams, we get to a bigram (and this process can continue). Formally, a bigram is an n-gram where n equals two. An n-gram is a sequence of n adjacent items from a given sample of text. Therefore, a bigram is a sequence of two adjacent elements from a string of tokens. 
It will be easier to understand in an example:\n", "\n", "Original Sentence: `What is a sentence?`\n", "\n", "Tokens: `['What', 'is', 'a', 'sentence', '?']`\n", "\n", "Bigrams: `[('What', 'is'), ('is', 'a'), ('a', 'sentence'), ('sentence', '?')]`\n", "\n", "As defined, every pair of adjacent tokens is now represented as one bigram.\n", "\n", "We can implement this concept with the `nltk.bigrams` function.\n", "\n", "**Question 2:**\n", "\n", "Create a list of bigrams from the tokenized sample and return the first 10 bigrams. \n", "\n", "**Answer:**" ] }, { "cell_type": "code", "execution_count": 50, "id": "f7863a99-e1bb-4975-84f6-2092e3400eac", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[('A', 'very'),\n", " ('very', ','),\n", " (',', 'very'),\n", " ('very', ','),\n", " (',', 'very'),\n", " ('very', 'slow-moving'),\n", " ('slow-moving', ','),\n", " (',', 'aimless'),\n", " ('aimless', 'movie'),\n", " ('movie', 'about')]" ] }, "execution_count": 50, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Import the package\n", "from nltk import bigrams\n", "\n", "# Create the bigrams\n", "sample_bitokens = list(bigrams(sample_tokens))\n", "\n", "# Return the first 10 bigrams\n", "sample_bitokens[:10]" ] }, { "cell_type": "markdown", "id": "45b46bab-dc89-459a-b0ac-8d0e35f35902", "metadata": {}, "source": [ "## Frequency Distribution\n", "\n", "Let's go back to the tokens (unigrams) that we created from our sample. It is good to see what tokens are out there, but it might be more informative to know which tokens have a higher representation than others in a given textual input. In other words, an occurrence frequency distribution of tokens would be more informative. More formally, a frequency distribution records the number of times each outcome of an experiment has occurred.\n", "\n", "Let's implement a frequency distribution using the `nltk.FreqDist` class. 
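
As a quick sketch before the question, note that `FreqDist` accepts any iterable of tokens, so even a simple whitespace split of a toy sentence (made up here for illustration) works:

```python
from nltk import FreqDist

# A toy sentence split on whitespace; FreqDist itself needs no tokenizer models
toy_tokens = "the cat sat on the mat and the dog sat too".split()

# FreqDist counts how many times each token occurs
freqdist = FreqDist(toy_tokens)
print(freqdist.most_common(2))  # [('the', 3), ('sat', 2)]
```

The same object also supports dictionary-style lookups, e.g. `freqdist['the']` returns 3.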
\n", "\n", "**Question 3:**\n", "\n", "What are the top 10 most frequent tokens in our sample?\n", "\n", "**Answer:**" ] }, { "cell_type": "code", "execution_count": 51, "id": "ccf1dce2-e0c7-4226-8d82-77aa0c5f5b04", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[(',', 4),\n", " ('very', 3),\n", " ('A', 1),\n", " ('slow-moving', 1),\n", " ('aimless', 1),\n", " ('movie', 1),\n", " ('about', 1),\n", " ('a', 1),\n", " ('distressed', 1),\n", " ('drifting', 1)]" ] }, "execution_count": 51, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Import the package\n", "from nltk import FreqDist\n", "\n", "# Create the frequency distribution for all tokens\n", "sample_freqdist = FreqDist(sample_tokens)\n", "\n", "# Return top ten most frequent tokens\n", "sample_freqdist.most_common(10)" ] }, { "cell_type": "markdown", "id": "1dcdff45-c55b-4dfe-a24c-07c2404722cd", "metadata": {}, "source": [ "Some of the results intuitively make sense. For exmaple, a comma, \"the\", \"a\" or periods can be quite common in a given textual input. Now let's put all of these steps into one Python function to streamline the process. If you need a refresher on Python functions, I have a post with practice questions on Python functions [linked here](https://medium.com/@fmnobar/python-foundation-for-data-science-advanced-functions-practice-notebook-dbe4204b83d6).\n", "\n", "**Question 4:**\n", "\n", "Create a function named \"top_n\" that takes in a text as an input and returns the top n most common tokens in the given text. Use \"text\" and \"n\" as the function arguments. Then try it on our sample to reproduce the results from the previous question. 
\n", "\n", "**Answer:**" ] }, { "cell_type": "code", "execution_count": 52, "id": "26b44a2a-d91a-4282-8a0d-fbc3d9314084", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[('the', 2),\n", " ('Not', 1),\n", " ('sure', 1),\n", " ('who', 1),\n", " ('was', 1),\n", " ('more', 1),\n", " ('lost', 1),\n", " ('-', 1),\n", " ('flat', 1),\n", " ('characters', 1)]" ] }, "execution_count": 52, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Create a function to accept a text and n and returns top n most common tokens\n", "def top_n(text, n):\n", " # Create tokens\n", " tokens = word_tokenize(text)\n", " \n", " # Create the frequency distribution\n", " freqdist = FreqDist(tokens)\n", " \n", " # Return the top n most common ones\n", " return freqdist.most_common(n)\n", "\n", "# Try it on the sample\n", "top_n(df.text[1], 10)" ] }, { "attachments": { "938d222b-5a6d-4e1d-960f-9bcf1a130b85.png": { "image/png": "iVBORw0KGgoAAAANSUhEUgAACPgAAADACAYAAACTO0+OAAAKrWlDQ1BJQ0MgUHJvZmlsZQAASImVlwdQU1kXgO97L73QEiIgJfQmSCeAlBBaAAHpYCMkIYQSY0JQsSuLK7gWRERAXZBVEQXXAshaEFFsi2LvG2QRUNfFgg2V/wFD2N1//v+f/8ycd7933rnnnHvn3pnzAKDSeVJpFqwBQLYkRxYd7M9MTEpm4p8BMtACeEAFajy+XMqOigoHqEyMf5f3dwA0Ot60G43179//q2gKhHI+AFAUyqkCOT8b5WOovuJLZTkAIDWo3XRRjnSUL6JMl6EFovxolEXjPDjKqWOMwYz5xEZzUNYBgEDh8WQiAChmqJ2ZyxehcSgBKDtIBGIJyug78MnOXiBAGc0LrFAfKcqj8Vmpf4kj+lvMVFVMHk+k4vG1jAkhQCyXZvGW/J/b8b8lO0sxkcMCVUq6LCQaHRnont3LXBCmYklqROQEiwVj/mOcrgiJm2C+nJM8wQJeQJhqblZE+ASniYO4qjg53NgJFsoDYyZYtiBalStNxmFPME82mVeRGaeypwu5qvh56bEJE5wrjo+YYHlmTNikD0dllymiVfULJcH+k3mDVGvPlv9lvWKuam5OemyIau28yfqFEvZkTHmiqjaBMCBw0idO5S/N8VflkmZFqfyFWcEquzw3RjU3Bz2Qk3OjVHuYwQuNmmAQDgJBBGCCKOAMnIAU2KFPBwByhItHzyjgLJAukYlF6TlMNnrLhEyuhG8/jenk4OQEwOidHT8Sb++N3UWIQZi0rb4OgOdzFGombREdADQrANCImLRZ+KLHqQqANi++QpY7bhu9TgALSEAd0IEuMASmwGqsMjfgBfzQikNBJIgFSWAe4IN0kA1kYBFYBlaDAlAENoNtoBzsBnvAfnAIHAFN4CQ4Cy6AK+A6uA0eAiXoBS/AIHgPhiEIwkNUiAbpQkaQOWQLOUEsyAcKhMKhaCgJSoFEkARSQMugtVARVAyVQ1VQLfQzdAI6C12CuqD7UDc0AL2BPsMITIHpsAFsAU+HWTAbDoNj
4bmwCF4I58H58Ea4DK6GD8KN8Fn4CnwbVsIv4CEEIGSEgRgjdggL4SCRSDKShsiQFUghUopUI/VIC9KB3ESUyEvkEwaHoWGYGDuMFyYEE4fhYxZiVmA2YMox+zGNmHbMTUw3ZhDzDUvF6mNtsZ5YLjYRK8IuwhZgS7F7scex57G3sb3Y9zgcjoGzxLnjQnBJuAzcUtwG3E5cA64V14XrwQ3h8XhdvC3eGx+J5+Fz8AX4HfiD+DP4G/he/EcCmWBEcCIEEZIJEsIaQinhAOE04QahjzBM1CCaEz2JkUQBcQlxE7GG2EK8RuwlDpM0SZYkb1IsKYO0mlRGqiedJz0ivSWTySZkD/Isspi8ilxGPky+SO4mf6JoUWwoHMocioKykbKP0kq5T3lLpVItqH7UZGoOdSO1lnqO+oT6UY2mZq/GVROorVSrUGtUu6H2Sp2obq7OVp+nnqdeqn5U/Zr6Sw2ihoUGR4OnsUKjQuOExl2NIU2apqNmpGa25gbNA5qXNPu18FoWWoFaAq18rT1a57R6aAjNlMah8WlraTW087ReOo5uSefSM+hF9EP0Tvqgtpa2i3a89mLtCu1T2koGwrBgcBlZjE2MI4w7jM9TDKawpwinrJ9SP+XGlA86U3X8dIQ6hToNOrd1PusydQN1M3W36DbpPtbD6NnozdJbpLdL77zey6n0qV5T+VMLpx6Z+kAf1rfRj9Zfqr9H/6r+kIGhQbCB1GCHwTmDl4YMQz/DDMMSw9OGA0Y0Ix8jsVGJ0Rmj50xtJpuZxSxjtjMHjfWNQ4wVxlXGncbDJpYmcSZrTBpMHpuSTFmmaaYlpm2mg2ZGZjPNlpnVmT0wJ5qzzNPNt5t3mH+wsLRIsFhn0WTRb6ljybXMs6yzfGRFtfK1WmhVbXXLGmfNss603ml93Qa2cbVJt6mwuWYL27rZim132nZNw07zmCaZVj3trh3Fjm2Xa1dn123PsA+3X2PfZP9qutn05OlbpndM/+bg6pDlUOPw0FHLMdRxjWOL4xsnGye+U4XTLWeqc5DzSudm59cuti5Cl10u91xprjNd17m2uX51c3eTudW7Dbibuae4V7rfZdFZUawNrIseWA9/j5UeJz0+ebp55nge8fzTy84r0+uAV/8MyxnCGTUzerxNvHneVd5KH6ZPis+PPkpfY1+eb7XvUz9TP4HfXr8+tjU7g32Q/crfwV/mf9z/A8eTs5zTGoAEBAcUBnQGagXGBZYHPgkyCRIF1QUNBrsGLw1uDcGGhIVsCbnLNeDyubXcwVD30OWh7WGUsJiw8rCn4TbhsvCWmfDM0JlbZz6KMI+QRDRFgkhu5NbIx1GWUQujfpmFmxU1q2LWs2jH6GXRHTG0mPkxB2Lex/rHbop9GGcVp4hri1ePnxNfG/8hISChOEGZOD1xeeKVJL0kcVJzMj45Pnlv8tDswNnbZvfOcZ1TMOfOXMu5i+demqc3L2veqfnq83nzj6ZgUxJSDqR84UXyqnlDqdzUytRBPoe/nf9C4CcoEQwIvYXFwr4077TitH6Rt2iraCDdN700/aWYIy4Xv84Iydid8SEzMnNf5khWQlZDNiE7JfuEREuSKWlfYLhg8YIuqa20QKpc6Llw28JBWZhsrxySz5U359DR5uiqwkrxnaI71ye3IvfjovhFRxdrLpYsvrrEZsn6JX15QXk/LcUs5S9tW2a8bPWy7uXs5VUroBWpK9pWmq7MX9m7KnjV/tWk1Zmrf13jsKZ4zbu1CWtb8g3yV+X3fBf8XV2BWoGs4O46r3W7v8d8L/6+c73z+h3rvxUKCi8XORSVFn3ZwN9w+QfHH8p+GNmYtrFzk9umXZtxmyWb72zx3bK/WLM4r7hn68ytjSXMksKSd9vmb7tU6lK6eztpu2K7siy8rHmH2Y7NO76Up5ffrvCvaKjUr1xf+WGnYOeNXX676ncb7C7a/flH8Y/3qoKrGqstqkv34Pbk7nlWE1/T8RPrp9q9enuL9n7dJ9mn
3B+9v73Wvbb2gP6BTXVwnaJu4OCcg9cPBRxqrrerr2pgNBQdBocVh5//nPLznSNhR9qOso7WHzM/VnmcdrywEWpc0jjYlN6kbE5q7joReqKtxavl+C/2v+w7aXyy4pT2qU2nSafzT4+cyTsz1CptfXlWdLanbX7bw3OJ5261z2rvPB92/uKFoAvnOtgdZy56Xzx5yfPSicusy01X3K40XnW9evxX11+Pd7p1Nl5zv9Z83eN6S9eMrtM3fG+cvRlw88It7q0rtyNud92Ju3Pv7py7ynuCe/33s+6/fpD7YPjhqkfYR4WPNR6XPtF/Uv2b9W8NSjflqe6A7qtPY54+7OH3vPhd/vuX3vxn1GelfUZ9tf1O/ScHggauP5/9vPeF9MXwy4I/NP+ofGX16tiffn9eHUwc7H0tez3yZsNb3bf73rm8axuKGnryPvv98IfCj7of939ifer4nPC5b3jRF/yXsq/WX1u+hX17NJI9MiLlyXhjrQCCKpyWBsCbfQBQkwCgoX0FafZ4Tz0m0Ph/wBiB/8TjffeYuAFQ3wpA5Cq0u/EDoA5VC5QpqEahHOsHYGdnlU70v2O9+qg41KO/DNWj9NDbCfxTxvv4v9T9zxGMRnUB/xz/Bd4XBP8F+DmQAAAAimVYSWZNTQAqAAAACAAEARoABQAAAAEAAAA+ARsABQAAAAEAAABGASgAAwAAAAEAAgAAh2kABAAAAAEAAABOAAAAAAAAAJAAAAABAAAAkAAAAAEAA5KGAAcAAAASAAAAeKACAAQAAAABAAAI+KADAAQAAAABAAAAwAAAAABBU0NJSQAAAFNjcmVlbnNob3RPAhDiAAAACXBIWXMAABYlAAAWJQFJUiTwAAAB12lUWHRYTUw6Y29tLmFkb2JlLnhtcAAAAAAAPHg6eG1wbWV0YSB4bWxuczp4PSJhZG9iZTpuczptZXRhLyIgeDp4bXB0az0iWE1QIENvcmUgNi4wLjAiPgogICA8cmRmOlJERiB4bWxuczpyZGY9Imh0dHA6Ly93d3cudzMub3JnLzE5OTkvMDIvMjItcmRmLXN5bnRheC1ucyMiPgogICAgICA8cmRmOkRlc2NyaXB0aW9uIHJkZjphYm91dD0iIgogICAgICAgICAgICB4bWxuczpleGlmPSJodHRwOi8vbnMuYWRvYmUuY29tL2V4aWYvMS4wLyI+CiAgICAgICAgIDxleGlmOlBpeGVsWURpbWVuc2lvbj4xOTI8L2V4aWY6UGl4ZWxZRGltZW5zaW9uPgogICAgICAgICA8ZXhpZjpQaXhlbFhEaW1lbnNpb24+MjI5NjwvZXhpZjpQaXhlbFhEaW1lbnNpb24+CiAgICAgICAgIDxleGlmOlVzZXJDb21tZW50PlNjcmVlbnNob3Q8L2V4aWY6VXNlckNvbW1lbnQ+CiAgICAgIDwvcmRmOkRlc2NyaXB0aW9uPgogICA8L3JkZjpSREY+CjwveDp4bXBtZXRhPgqnAe9CAAAAHGlET1QAAAACAAAAAAAAAGAAAAAoAAAAYAAAAGAAAEgUr8MHggAAQABJREFUeAHs3Qn8XNPdx/FDaKwRxFJUEqkl0jRaS2sJsZWWErVFqVrSEoTIwlOCECFCNGiLilgqWinRisQSRFsEsYU+5UErpG3SBy0lkhBPnvme9tyee/53ln/unZk7M5/zeiUzd5l7z3nf7fzn/OaclZYXkiEhgAACCCCAAAIIIIAAAggggAACCCCAAAIIIIAAAggggAACCCCAAAIIIIBALgVWIsAnl8eFTCGAAAIIIIAAAggggAACCCCAAAIIIIAAAggggAACCCCAAAIIIIAAAgggYAUI8OFEQAABBBBAAAEEEEAAAQQQQAABBBBAAAEEEEAAAQQQQAABBBBAAAEEEEAgxwIE+OT44JA1BBBAAAEEEEAAAQQQQAABBBBAAAEEEEAAAQQQQAABBBBAAAEEEEAAAQQI8OEcQAABBBBAAAEE
EEAAAQQQQAABBBBAAAEEEEAAAQQQQAABBBBAAAEEEEAgxwIE+OT44JA1BBBAAAEEEEAAAQQQQAABBBBAAAEEEEAAAQQQQAABBBBAAAEEEEAAAQQI8OEcQAABBBBAAAEEEEAAAQQQQAABBBBAAAEEEEAAAQQQQAABBBBAAAEEEEAgxwIE+OT44JA1BBBAAAEEEEAAAQQQQAABBBBAAAEEEEAAAQQQQAABBBBAAAEEEEAAAQQI8OEcQAABBBBAAAEEEEAAAQQQQAABBBBAAAEEEEAAAQQQQAABBBBAAAEEEEAgxwIE+OT44JA1BBBAAAEEEEAAAQQQQAABBBBAAAEEEEAAAQQQQAABBBBAAAEEEEAAAQQI8OEcQAABBBBAAAEEEEAAAQQQQAABBBBAAAEEEEAAAQQQQAABBBBAAAEEEEAgxwIE+OT44JA1BBBAAAEEEEAAAQQQQAABBBBAAAEEEEAAAQQQQAABBBBAAAEEEEAAAQQI8OEcQAABBBBAAAEEEEAAAQQQQAABBBBAAAEEEEAAAQQQQAABBBBAAAEEEEAgxwIE+OT44JA1BBBAAAEEEEAAAQQQQAABBBBAAAEEEEAAAQQQQAABBBBAAAEEEEAAAQQI8OEcQAABBBBAAAEEEEAAAQQQQAABBBBAAAEEEEAAAQQQQAABBBBAAAEEEEAgxwIE+OT44JA1BBBAAAEEEEAAAQQQQAABBBBAAAEEEEAAAQQQQAABBBBAAAEEEEAAAQQI8OEcQKAOAgsXLjQTJ040c+fONYceeqgZMGBAHXLBLhFAAAEEEEAAAQTaK0A9rr1irI8AAggggAACCCCAAAIIIIAAAggggAACCCCQhQABPhUqTp482YwdOzZa+7TTTjMnnXRSNM2b9AIXX3yxueOOO6INjR492vTv3z+abqY3Q4YMMffcc09UpClTppgddtghmm72NwcddJB54403omJOmzbNdOvWLZrmDQJ5FViwYIH52te+FmVvww03NA8//HA03apvcGnVI9+45S53zt53333mrLPOigqo+ojqJXlKO+64o1myZInN0mqrrWYeffRRs+aaa+Ypi3XLS7njl7bO2er1uLodWHYcCXD9RxS8QQABBBBAAAEEEEAAAQQQQAABBBBAoKUECPCp8HDfeOONZsyYMdHaJ554ojn33HOjad6kF/iv//ovo0AXly699FJz5JFHusmmeu3du7dZtGhRVKZhw4aZU089NZpu9je77LKL0a/fXZo5c6bp0aOHm+QVgdwK/PnPfza77757lD81pr/00kvRdKu+waVVj3zjlrvcOfurX/3KDB06NCqgAlMnTJgQTefhzRZbbBHLxrPPPmvWXXfd2LxWnSh3/NLWOVu9Hteq51Weys31n6ejQV4QQAABBBBAAAEEEEAAAQQQQAABBBConQABPhVaE+BTIVSK1dI2tqTYdc0/Om7cOHPddddF+33ooYdM+EV9tLAJ3xDgU9lBffnll42CCV363Oc+F+vlys1vtNe8lqtfv37m448/jjjVQKweevxULijAX7cZ3ld6rFrNpRmObauXodw5Wy5AJA9+Yb2BAJ//HJVyxy9tnbPV63H/kW7sd5U+4/JYSq7/PB4V8oQAAggggAACCCCAAAIIIIAAAggggED1BQjwqdCYAJ8KoVKslraxJcWua/7RTz75xKjXmtdee83st99+Zptttql5Huq5QwJ8KtN/8cUXY8PUbb755nYIlso+nd+18lqusLHsiSeeMBtvvHEMslxQQGzlJpio9Fi1mksTHNqWL0K5c7ZcgEgeAMN7FgE+/zkq5Y5f2jpnq9fj/iPd2O8qfcblsZRc/3k8KuQJAQQQQAABBBBAAAEEEEAAAQQQQACB6gsQ4FOhMQE+FUKlWC1tY0uKXfPRGgsQ4FMZeCM3PJUqYV7LFTaWEeBjTKXHqlywRKnzgWUI1EOg3DlbLkCkHnkO9xneswjw+Y9QueNHnfM/Vq38rtJnXB6NuP7zeFTI
EwIIIIAAAggggAACCCCAAAIIIIAAAtUXIMDHM/7b3/5m7r77bvOnP/3JLF682Gy55ZZm7733Nr169TLtCfBRrywzZswwr7/+uvnf//1f06lTJ7Ppppua3Xff3eyxxx6mQ4cO0V4/+ugj87Of/Sya1puBAwfG1nn66afN888/H62z8847my9+8YvRtPJ66623RtNrr722+fa3v22np02bZv76179Gy77+9a+bDTbYwMyePduo8foPf/iD2WyzzcwXvvAFc/jhh5vVV189WrfWb5IaWw499FDz1FNP2fy+8MILdhir7bff3uy6666mS5cuRbP46aefmt/97ne2t5P58+ebDz/80B6Dbbfd1hx00EFtht0puqEqLbj33nvNX/7yl2jr+++/v+natWs0rTfLly83Tz75pNHwAa+88op58803Tbdu3WxvP+rx56tf/apZaaWVYp+px4TypmOj11dffdWsv/76No9bbbWV2W233RLPqaQAHx1PnZM6N3UN9unTx+ywww5G5/tqq61WtGi6hnS9PfPMM0YNtiuvvLLZZJNNrI96R6rnOe1nuj1OOua//e1vzYIFC2LXtrZ39tln283qfNF5o6RrXNe6SxpW6pBDDjG///3vzWOPPWbmzp1r1llnHXPppZfGzpm010l77dtbLleear6+99570bBnl112WWxXJ5xwgr1f6vw75phj7H25WFCA5j/++OPWWy477rijPX91DpdK7rmj80PHW88L3ZP32Wcfew77z4tS28l6WXuPVdYu//3f/22mT59u5s2bZ95++22z0UYb2fvfN7/5TbP11lsnFreRnne33XabWbRoUVSOI4880nTu3Dma1hvd1/T8ckn3Mz0DlIpd87L69a9/bXuH03moXr++9rWv2fup/WAd/rvjjjuMrjOXDjjgAHuOu+m33nrL3HfffW7SPgvdvc3NnDJlivnHP/7hJm0dR3Udl5YsWWLvmap/6Z+uJT2DdK6oDvelL33JrRq9Fjtn3QrlAkT0jFbZ3n//ffcRe572798/mtabNNf4//3f/9l6jJ6LqjPo3q7y6DrQfaneDfw6x6pZh3WQOq91f9WxVT1j6dKlUT1D99qw/qTPlTt+SXVOXYcuqd54++2327qYm6fzaKeddrKTpepxxa5P/V2gY6nn8rJly+z5ue+++yaen26fOgd+85vf2M9pu+utt549Bw4++GDTsWNH89Of/tStas8J96yKZlb5jXtW+Ls57rjjbN78ebpOfvGLX0SzVH9VXtdYY41oXpo6SXvrzC7fldazokwW3qzINa3gO9VTXdpuu+3scZ8zZ449t9944w17bh1//PFuFZPV9a9rRvv/4x//aK8f1Yv195DukfpbYt1114326deJ3Mykv1G0bPLkyfbvK7eezuXwnuSW8YoAAggggAACCCCAAAIIIIAAAggggAAC2QkQ4PNvy1mzZpkTTzwxUfbkk0+2QQtjxoyJlmvdc889N5p2by655BIzceJEN9nmtXv37kYNVQqCUNKXtwpk8Bv67r//fvulq/vw9773PfPwww+7SXPEEUeYsWPHRtPPPfecOeyww6JpNUDcdddddlqBPgoScUmBSjfddJNtXHDz3KvypmVqEKxHChtbZKnApl/+8pdtsqNhcxSMpUbfMKlRRo0G+lVusXTdddfZRs9iy6s9X4EDjz76aLQbHZc999wzmv773/9uAzn84x4t/PcbNfbqPFDgRj2SGqeuvvpq86Mf/ajo7hUkp+V69VMY4DN16lRbXjXehUku119/vVlllVXCRbahQkFg/vXjr6Tr7M4770xs/PPXq+b7FXHSOe8CeYrlTUF+asBUUhDggAEDolV1D9B0uA35uoCRtNeJGozaa9/eckUFquIbBZMpmKZcUrCUGiKTggJ0vw0DEtz2zjrrLKNnSFJSIMaZZ56ZtMjOU8P1zTffXJcgtfYeq6xc1Eh7wQUXGAXAFEsnnXRSm3Nb6zbS807H9t13342K
+NBDD7VpGA2fE6pb7LXXXvYzSde8zjPZJCVta+TIkUmLqj5vyJAh5p577on2M3r0aHP00UdH05MmTTIXX3xxNK3nu4I9XdJQSGFQ10svvWTWXHNNu4rqQCNGjDBqIC+WVP4f/OAH0f1P6yWds9quS+UCRFSPGDdunFvdvt5yyy2mb9++0bw017jKrbqMAgDCpHqazgcFb/mp1j34VLsOq7IpcO+cc84p+pzXOj/84Q+NAl78VO74hXVOBcC6AB8Fmnz/+983+tvAJdUndDwVQKwUXp9+PS7p+hw2bJg9nm57/uvQoUPNaaed5s+y7xXIdOyxxxY9BxTcEz573LOqzcaqNEP1CtVH/aTADwVn+0l/25xyyinRLF2/quO7ul2aOsmK1Jnb+4xzGV/Ra1p14SuvvNJtxgwePNjeg/S3jEuqG7tnXxbXv34Acs011xjdq4olndeqZyhwUEnnvoJ+/OeT7uGnn356bBP6AYvW85OCcnv27OnP4j0CCCCAAAIIIIAAAggggAACCCCAAAIIVEGAAJ8CqnpGOfDAA0s2HqjHFPWw4FJSgM9VV11l9K9c0rZ+/vOfR4EZaphyATn6rL4Adr8A1xetYXBE2Pil3ntGjRoV7db/IjZs8FQQT6lGMPVYdMMNN0TbquWbsLGlXF7lqEZD1zigvKoxRMfGbxwsVgZ9ia4v0+uRSjUMqcFMjbjq1aBc0rmhRpN69OSjBjf/19il8qrelNSLlUthgE+5Y60GLv8c13bUOKteavxGCLd9/1XXiwKI9FqPtCJO7W14ChsT1WCT5OICfNJeJytq395y1eJ4pQ3wUR51bi1cuLBodn/84x8b9Z7mJwX46T5QLvXr188GuK266qrlVs10eXuPVRgsocysiIuef6WCBl0h1Vh+6qmnukn72kjPu6wDfIpd8z7QFVdcYb71rW/5s2ryXkGWCnRzKQxSVlDSzJkz3WL7qh7M1JOVknpzUo81LslOPeco/c///E+ba8utF77qeTF+/PhodnjOKuCg0gCfBx98sE3gnl93007SXuN65vm9M0YZ//cbPf/DoNhaB/goK9Wsw6qunBRMH1poWj2wqSdKl9IE+CjgTIFnLuncUD1dPZ64VKoel/RMVi9TxYKRtU0FjvTu3dtt3r4qSFf34mIpqe5U6wAf5e0b3/hG7G+kQYMG2fPCz7eOo46nSwpeu+iii+xkmjrJitaZ2/uMU0bTXNNhgI+C9MJ6vh/gk8X1ryAiBd1UktRjnP6uUtIPWhSw5pIfUO7mKfDujDPOcJNG52KpHyVEK/IGAQQQQAABBBBAAAEEEEAAAQQQQAABBFILtHyAjwJo9MV02EgiWQ1bpS/Kk1IY4KMhCtTrgJ/05a16hlBPF+qO30/qYUM9+ag3jbARQl3bn3/++Xb1sGHLbUPbc78iVkOn/wtQbdcNCxM2eLrPa5grDfeQ1Cj9yCOPRMOAuPVr8RoG+Lh96ovlDz74IDEwScN6+L/sD7/MVqOMegpQQ6GCTMJGxAceeKBNAJXbbzVfSzUMqechF+ClPKgMagTREBQaakmNiH4jkRopdDxrmT7++GO7Tz8fOg/VOKEhABQk5l9TagBUg49LYYCPm69fA6uB1d+ulslALi6Q6Z///Kcdas1vHFHjgnqUUcCXhs7wr10tUyNHqaG+XB6yfF1RJ/XYoMZEBemEDSZqHFfSOe1+8R82JoZl0L1I29L5oyHM0lwnaezbW66wHNWY1tAvrtFf904/aSghnXtKF154oR1yJAwKcOtrPd3Xw3u9lusZ4wet6NzUUIF+0nB2Gt5CQ38oeMEP0FKjdTh8mP/Zarxv77HKwkXD4YS9zKjs6uVOAbauZwNXXh03BW241EjPu6wDfJyBXnW/U+8K4X1Uy+rxzNMwYxqe1CW/IVh1MA1VE+ZVwdIuqCcMEBo+fHjUE8h5551nh4nxt61gus985jNGvSL5zwGto2FJ3XMgPGd1DVcS4JN0/Sp4QfVCl5LW
ac81rsZ2d39329SrgubUs4d/f/CX1yPAp1p12KTADdUR5KgeW2Tk12H94AiZhPnSPXfChAkRV1jndD34qPcZnVd+SqpnlarHFXsmKxBPgVl+z5puP2EAmp5HymOYVCdW3cL/0YG/js49f9grf1m13qs3KfW66ZICRXR8XFLPbBrazD9vdb93PcCkqZOsaJ25vc+4tNd0GODjbPSqe4+GHFQvOqo/Z3H96288vzcx7Ud/W6iHHf1dqh8H+Pdd/76quqL/bNVnw3tLGNjn/7hE65MQQAABBBBAAAEEEEAAAQQQQAABBBBAoHoCLR/go2AENar6SY26akzVF+TqMl5DqISN7H6AT1J35mpo0FANbjicsIFK+3M9yCxYsMDsuuuuURb8X0omBQ5pRb/xSz08+IEO+tJfjVtKSQ2e+tWlvkRWvvUlsv8LTH1Gv1rWNmudwsYW7d/Pi76Md8MnuLz5wzIk9cQR/iI67E1FDTRhg7LbdjVfSzUMadgrvxFK558aP1zSL4j9Hp/UI4M/vJdbr5qvapzSueWSGmnUWOOSGnTVW4Qa6JQULOcPHRMG+KjRS41x6uVHjZe6dvzGIm3D7wUo/OW1Alj0+c6dO2tV89FHH9lh6/wGsHoMy5bWKWy4Ujl1/MOU1JgoU/Uc8+Uvf9kGPamBTQFSaa+TLOwrLVdYzmpPb7HFFrFdqCewsOenMChAH9D5ryEudN99//337T3F/9W8tuH3KhYGZeqZo3u6gq+UFBwXDnmiY9ylSxe7vJb/VXqs0rokPUcV2KheHlwKG+zVUK6AFZca6XlXjQAfNRLLqEePHvb5rsBfv+ccOekZOHDgQEdWs9ewnqLzaq211rJBCgqAC5MCczWUl5ICnv3gLj3/FEynpPrLP/7xD/u+Y8eORkEa7jpREITqVsWCCsJztpIAH9VTFHjkb1NDOYWBGGmv8XBYMxVQgX86b3QvVyDhd7/7XVtu/7+wEd5fVq331arDKthRQQQudevWzQbSu3q1nu/huaOg+NVXX91+JLxfVBLgo8DZ73znO26X9lXDYCUN41iqHpf0TFbAxOWXX27v8wrAU71O9VqXVE/yh7ILh+fV+angZQV7KyUFnLj5tQ7wSQommT17djSMbvhMU/1E9SMdy7R1krR15kqfcWmv6aQAHwXn65zQMVX9zNXTsrj+VWf2e2TVcH7+81T3VPdjEp03fs9oyof+rvD/tlT+3fWmun04vHQ9gkeVbxICCCCAAAIIIIAAAggggAACCCCAAAItKVD4Eq+lU+EXjMsLvyaP/u28887LC0E9MZP33ntveaFHnGgdrV/ovj9a5/nnn48t0/LCl93RcvfmqKOOiq1XaLxyi5YXhmSKLSt0V2+XFYYgieb76xR+XWyXK29+/gsNDtE29Sbc57hx42LLC42qywuNCrFtFH69HFunVhOFoQhi+Sj8er3Nrg8++ODYOn55Cl9kx5b94Ac/aPP5Qm8dsXV0vOuRjj/++Fg+Cr0mRdkoBLfElimPhS/OlxeCVqJ16v2m0LAUy6POwUKAw3L5VpJUJv+8LTQ0xD5WCJKILde6hcCWaB2d5/7nCw150TL3Jry2k84Ht261XtM6FX5FHStnoReMxKw+9dRTsfVkU2i0Slw37XWShX2l5UosQBVn+ueU3hcartvsrRC81sa6MExQbD2dq+G23HNl2bJlbZYlPS8KPQbF1ks6x2M7rdJEpccqrcsLL7wQK2+xe7P/HJRxIUgjKnkjPe/COkUh2Dgqh3sTPicKgcZu0fKka74wrFW03L0pBPTEXIcOHeoW1fS10MNhLB+FwAa7f9373bXiH1v/XlcIqInW0bqFINCK814I4I19thCgGH02PGdVF/JTIUAq9lnd+8K8yFP1KD9lcY37FipzYagufxf2/fXXXx/Ln9Yr9GzTZr1azAjzm1UdtlzedZ6480evhSFoo4+Ex68QDBYt05uwzqlzNKwPF4KqYp/xJ8Lr06/HJV2fhd7v
/I8vV13bz3t4/oX3iEKPlbHPa6Iw5FVsG9peoVeWNuvVYkahd8FYXuTvUliv9f/+SVsnCbfd3jpzJc+4LK7pa665Juaj413sWIXXUzWuf9Vv/PNP55ufCkHHseW6XlwK67aF4CG3iFcEEEAAAQQQQAABBBBAAAEEEEAAAQQQqIFAy/fgE/7y87DDDjOFoJE2wV5h9/F+Dz5hV+r6Zar/q1y3MfU2U/iC102avffeO/p1ZSFgyPZW4xbqV7zqZUe9+bgu1NUbRyHgx65S+FLW9iqkX8jql+4uaZgw/1fdYY8G/i8w3WfCX4pqKJrwF8xu3Wq+hj34FAIyjH7B7KfQ0D8Oo0aNMoUvwf3V7VAlsRmFiUIDUGyWflnsfhEeW1DFiUJDoSkEAUR7uPHGG6NeeIoNy6aVdT4UvoS3546GO6h1vl2G1dtG0rAqWq5zU79K3n333W2Z9KvzMIU9+Pg9Mrh1NdxWIXjOTRr/V/S6dsLjqP36KVyu/KiXlVqmtE6V/rI87C1A5vqsG9LML3Pa6yQL+0rL5ee7Fu9XtAef8B6iHqQKjXexLLueNdQrRaEBMrZMPTOF13J4/qonlpNPPjn2uVpMVHqswt5QlLf2uITPUX0+vKY1L3RR73jqpUqpkZ531ejBJ/SWiXof9J+jYa9HWqcW6cEHH4ydvxoCqRAgERsuUL1Y+T0Kqn7TqVMnWxdyeVSPi4XAFjcZvWqopueee872avLOO+/YYT3Vm1Y4FJLqd6rnKYXnbLkefKKdeW801I2GhfRT2mt8yZIlZtttt/U3aesLuk/4Scc77OnL3Wf89Wrxvlp1WOVdvYWot00dX5VZPTZpqMhCgLsdNsgv36xZs6Iebtrbg4+/Hb1XDyaqmxVLpepx4TPZ1dn9bSX1IKrebJTUw48bvsp9Rn9X6O8LP7355ptR3dHNV88+te7BR/suBBsZDVXnkj/kWGjl37fT1knS1pkrecalvaZlEvbgo57AdM8LU9bXv3oyk5HuDeqRR9eO/mk4Qr8nsvBv1/D+ouU6r1WvVM9A6i3NpXrVT9z+eUUAAQQQQAABBBBAAAEEEEAAAQQQQKDVBFo+wCcM3FGQiYZbCFMYCOQHloTDaO22225tAk20valTp5rhw4dHm/aH4ir88jc2bMbYsWNto6W6VFdSY7GGZ1DQj0tquNIQVP6XrGok3WabbdwqbRo8lVd/ODCtGH65npcAH5UrHJJLjS1jxoyJyucfh1NOOcUUem2JllX6RkEk66yzTqWrZ7Je2NjhB/hoBxqiSsehVFJjpLr233///UutVrVlamxzDaWldlLoickoiMwNP6R1wwCfmTNn2mFl/O3o2KqxziU/wCcMxHDrlHrVdaHro9YpjVMlDU8qT9iYqOFr/GHc/DKnvU6ysK+0XH6+a/E+LFslQ3SFQQEun+G2XMN70rAy7jOlXpOGASq1flbLKj1W5YIlXH6KuYTPUbd+udeJEyeaQm8HdrUwwCfPz7usA3yKBe4kBREkBQKVc067XEEZCvx0qX///mb8+PGxIGY9iwcNGhQF5Sio+bOf/azRMJQuhcO2KQBaz0oFDFSSsg7wSRqWJu01rqCh8LmedMw0nOXWW28dK7a7z8Rm1mCiWnVYBbAo6Ev3oUqS8qGhvJTSBvgoKEdDYrkhv+xGvf9K1eMqeSYnBfG4AJ/wvqvd6loOg3aTgkHqFeBT6D3KBqA7Ij0bCz2zGQWY+H+7KFCk0MNRVCdMWyfR/tLUmUPrpKFQ017TymMY4KO/PzUEb5iyvP4ff/xxW//2A3nC/bnpMMBH8zUkl8ruUqEXKXvP0VBf/rCj/nXn1uUVAQQQQAABBBBAAAEEEEAAAQQQQAABBKon0PIBPueff74pDBERCatxacSIEdG0exP+OtkPLFHDkn696JKCcdQDT5jCL6D169zbb7/drqZfU6pHFJfUK0/v3r2NAo6U1HOD9uF/qXrttdfaxofp06fbdfRleqGr+ehLc81s
pAbPsAef9gb4DBs2zBSGBLAW7j8FOoRJjYj+fPXqsvbaa4erVXW6VMOQ2/EzzzxjGy2nTJniZiW++oEviStUcaYa3xRIol9ul2pACIMT0gb4hI3japgIezdYvHixbZhwx1qNfmpQrkdaUadKGp5UnkoaE125014nWdhXWi6X51q9hsEn1QjwmTdvXhSQ4srlzlE3rVfXe5VbdtBBB8V6Z/PXreb7So9V2gCf8DmqMrmy++WTixreO3fubGfruaFzUqmRnnfhdfTQQw+Z8Pw77rjjTGHYLVs2/ecHM4XXfFLjrD4jL/WG5lKxgDS3vJqvCurR+aSk+/WkSZPMPvvsY6ddveknP/mJueKKK+w8lb9Hjx5Gvf245AfUFHq5tIEwCn4Jk8q54YYbtunxKesAHwWOqt7RsWPHKAtpr/Gk3kJUtwvrKXrmunPf7bxeAT7VqMMWhi40ffv2dUWLvep8X2+99WyPPv4CP9AgbYCPtqs6t4LKklKpelx4fepeFgbdlgrwUe9T4f1PwRoKePNTYXhI8/Wvf92fZXuxqkcPPspEGJitv4VUFr9XUPf3jMt02jqJ286K1pkrecalvaaVx0oDfLK6/nWv1N+0SWnjjTe2vZy5HmK1TtIzJPxRhe7FhaHY7N+nbrvu3u2meUUAAQQQQAABBBBAAAEEEEAAAQQQQACB6gu0fICPAmxGjhwZSRcbxkc9yfjDbvkBPhpqSV/0++n111+PBdpoWRhMpAbbCRMmRB87/PDDo+EG1GjUs2fPKGBFDWH9+vUzCupRry1K2qd68HGBFX53+G6jjdTgmTbARw131113nSu6/dXq6aefHk3n6U2phqEwn/r1s37Rq+70p02bFvVu4NY74IADYkO/ufm1fNUQGmoA0TAA6nVHDWt+CoPP0gb46NrRr9Rdmjx5cpthj9yyPL2216mShieVr5LGROeQ9jrJwr7Scrk81+o1DLCoRoDPBx98YPr06RMr0h/+8Aez2mqrxeblZaLSY5U2wCd8jipwQb3WtSc10vMuDPDRs8v12OfKHA6HVyrAR5/RfaBLly7u4/b1F7/4hTnnnHOiecV6GIxWqOIb1XfUG6JLClrW/UjJNforQEV1ISX1SqTA51/+8pd2Wg3QKqPrxSSpVw3VkRQ87XrlU9ll4FLaAB8N2+T3LKft+vVBTWdxjSvA2298TxrGUr046pz3U70CfJSHrOuwCr5XvdklBRJo6Fb1BPWZz3zGzg7rEmmH6NK2ZeinYkHUpepxlTyTSwX4aP+q82tIJZeSetcMexfVuvXqwUf7Vh3VH2ZP17iCv/y6ueqH6sHUpbR1Ercd99reOnMlz7gsrulKA3xUjiyuf/We6fdaqb8RBw4caHvgUY+aYQBdUoDPX//6V6Nnhkv6G1mBl/7fvBqWTfdAEgIIIIAAAggggAACCCCAAAIIIIAAAgjUTqDlA3z8xiTHrp52/F8N65f14bBdfoNO0q8t/UYkbVfr6JfqfoPNBRdcEOuRIfyiXkERbn11c9+pUyfjD/ejL2NdcI/2oV+9+0NZaF4jNXimDfDR8Fzq6t8lNSCoV4hVVlnFzbKvarTX0Bb69b9+Ee8PGxJbsYoTpRqGFCSjvLmkIThWXXVVN9lmKIKk4QSilav0Rj3SqMHDpc022yzqUUPzVIZvfvObbrF9nT17ttloo43s+7BRrr1DdKmhS9epS/716Obp2tEvupXkqTx+/vOfd4tr8prWKWx4UqaTGu8qaUx0BU57nWRhX2m5XJ5r9RoG+CgAVMECfkobyKJthYEbSQ3IanxToKiSzl81+OmeX+tU6bFK65L0HE0KsNK95Z133rEmCvTQkJPuHt9Izzv1quN6adIxPfbYY+1wme74JvXMUS7AJwzy1bBYCgBduHCh26xt4PUDfqIFNXgTBqT4dRgXxKzGeX+YUb8epCEhXUCQsnvTTTeZ0aNHRzlXzxIa3tQlBVSqcdovv183K3fOhj3AaLsaQimsq2h+
WG9Me43rfHjssce0aZt0H9KQcx06dLDTclJ5dX36qZ4BPlnXYb/3ve+Zhx9+OCrelVdeadQLlEvz5883e+yxh5u0r2l68JGn6uUHHnhgrOcnnYP6O8DVX9wOS9XjKnkmlwvw0dCmYW+gelaozJ9++qnRcEnq/SZMSXWEcJ1qTavepWeVSwpkfO+996KeltRzjHoickF6Wi9tnSRtnbnSZ1zaa7o9AT5pr3+dHwqQ9JPuJ5tsskk0K7y/JQX4aOXwBy66TvyeRdXLnOrXJAQQQAABBBBAAAEEEEAAAQQQQAABBBConUDLB/gkdYOvL/PVsKBGJvWsoC7KXaCNOzRhQEH4ZazWu+qqq2zj0htvvGEuu+yyWA9AWh42XioYQV+chkmBKq5HFAWmKOAjKYVf3mqdRmrwDBvN2jtEl4Zk6tWrV4xGDYL6BbF6Nfjoo4+MGojUkOiSP0yam1eL11INQ/6v4JUXDRGhRifXsKfzSv9c0nBufiOnm1/N17AnBv2yXsN06dpRUjCP8uVSGISUNsDHD3Rz+1DgiRrNNTSFgiP0S3+/gfTMM880gwcPdqvX5DWt0x//+Eez7777xvKq80GBfBqqwzU4VtKY6DaS9jrJwr7Scrk81+o1/NW8ztOTTjrJDifUtWtXm41yQQEur2GwkN/wrt4M/EAFXTdquN1pp53sdS5j/freD0zQ9fWVr3zFbb5mr5UeqyxcwueonsFqFJWlGizVY93w4cNjZVfj7FprrWXnNdLzbtSoUebWW2+NleW73/2uDYxUsJPu6f7x14rlAny0jnrYUkO0AjDVc43f05mWa5hCBQLVIy1ZssRsu+22ibv2h6AKn4/uA3ru+YGjSQ3U6vVJ54uCm/RMCAMk0gT4uF4XVR9U4JTfu4oaxzUkjoaNUkp7jSuffq8n2qZ6dFHwiQKXVC7/+ablSv595l9zavd/1nXYsE6o8stF9TkFP5566qlR4IgrZZoAH1fnfPnll+3xddvUq3ouUd1RPZ+4FJ6n+ltBPTwpVfJMLhfgU8zT7b/Yaz0DfJSnpMAkl1c914YOHeom7WvaOknaOnOlz7i013R7AnyyuP7DerZ6NFV9Rr1f6d6hY+H/bVsswCfsBc4/eElDz/nLeY8AAggggAACCCCAAAIIIIAAAggggAAC1RFo+QAfsepL+TFjxrRLOAzwUUCBflns96hTaoNJve2Ev1x3nw+/EA8bFbReGEDhPttIDZ5hY45rbHFl0Wt4rMLj8Lvf/S7WK5L7rN9TgJunV/0iXj1A1DqFx9BvGAqHOFDeFACgL9J1nilgzE/1KEPYbb/Lj3pDUuOj3zOFlimwRgE2LoUND+3twUfbCXsLcNtOOtbyU8PfBhts4FaryWtap2L3BGXeb1ippDHRL3Da6yStfaXl8vNci/fh/dLfp2s0zSKQReVXr3D65XuYdK76jW5arutKv5j3ez0IP1et6UqPVRYuul4OPvjgNs/RJBOVV0PB+MPBhMcv6d4YBtYoCOQ73/lOtfiKblfDTp199tlFlyctqCTAJ+lzbp7OI/VK5fcI55bV6jV89mm//r1M0zfccIPR8z9Mfi9wWqYAKD1LwlTsfNF6WQT4aDt6ximg1E8KxlQQgK7TtNd4Uu9L/r6Kva9ngE+xe8WK1mGT6kIqd6njm0WAj/YR1jU177zzzjPHH3+83toUnst+Pa6SZ3K5AB/tRL1QKki9WFIwn9/LkdZzz6pin6n2fB0DDQWVlKZPn26HHw6XpamTJJ0nOkcqrTMXO2+VR//elPaabk+ATxbXv85XDV9baSoW4KO8FOvptF7Pz0rLxHoIIIAAAggggAACCCCAAAIIIIAAAgg0qwABPoUjq+FP9ItTfUmclPRFsRpy/F/bh4El+pyGblDjZNg4G26z1Bei4ZAE+mzYSJnU8HDccceZ888/P9xVS/Xg4wqvX9EPGjTITRZ9LXUcin4oowWlGobUiKDeZ+6+++6ye1MjihqJXe8+ZT+Q
4Qrh8CjFNq1fvqvXCDUeuJRFgI+2dfHFF8d6ZHLbD1/vuusu21ATzq/FdBon5e/yyy831157bZus+j17VdKYGG4g7XWS1r6ScoV5rvb0b37zm1gDrr8/12iaRSCLtqsexdRjixrkS6Xu3bvbRt5111231GpVXVbJscrKpdLnqHpQUQ9Z/r2vkQJ8dJ/Xc0A9+SUl9V7UuXNno2GtXCoV4KOGaPXqNWPGDLd67FX333vvvTfq9Su2sIYT6gVF9w4/hQGgSUPm6DoIAxm0jYsuusjcfPPN/uZi7zVE0Jw5c6J5WQX4aIPXXHON+eEPfxhtW28ULH7UUUfZeWmv8aShLv2dnXzyyW16+alngI/ylmUd9sMPP7T342L3SF0jb7/9diwgMKsAH/UYpvtzeH3q+tJ+lUrV4yp5JlcS4KP9qM6ve134t4V6/NRwe9ttt51Wi5J7VkUzavxG9zYFhIT5LXYNu+ytaJ0kizpzJc845TPNNd2eAB/tK+31/9prr9m/AYv98EQ9Yj366KPalU3FAny0MOm61vywJ1rNIyGAAAIIIIAAAggggAACCCCAAAIIIIBA9QUI8Pm3sb60VUONvkj3k74APffcc+2XmBomyaWkAB8t0xeqGkYiqZFNjQLqtUE9/RRLCiJSDwN+Cr+s17SGivCThnfZZ599/Fn2fTjkSdIwL2GPBn4DVZsNVnFG+GvTpF6OwuAmBfKMGDGiTa5mzZplh3YJe5LRijqmw4YNazOcV5uNVHGGzh/l0aVbbrnF9O3b103aV51DahB45ZVXYvM1ocZcBaWFn2mzYpVnaCgLnS9hI5h2u+WWW5oBAwbYRjJ/WAst0zHwhzdRI4N6ofJTaKSGYX3OT+otSAE0Oi/C4Wy0noaz0jAebigr/7O1fL+iTsrjsmXLzH333WeHQPMb+xUkddttt9liqAFUw1S4VOnQc2muk7T2lZTLlaeWr2qY1ZAU6lXKb6B09+EFCxbEev0q1igWDtHlD0HkyqMhIvW8UG8u/r60XIGl6p1GQ92tvvrq7iN1ea3kWGXpUuo5qvuK7t/qLSXs0aiRnnc6kBpGS71zqGHbTyqbAlDVeO83wPrPiaQAAt0P1PuNggH8pCBLBYL27NnTn12X90mN1mEQs4IrdJz9VKzOpXUU+KSy+88U3QPlN3Xq1FgAkOp5CsRWKnfOhj2DaMhPBQi5pHwqyCKsZ/iN3mmvcW1r7NixsaHWdG/QeaOAttCp3gE+WddhNXyTrgX1/OLfI+WuoZ4UTOX3aqhe0TbbbDN7iModv3J1TtUpdC36+1U93tXvwzqKf31W8kwOA3x0XF966SV3esVe9TeK6oIavk+Bf3q+qF6zdOnSNte1rrF6PzN07ek56ifVWTVMVKmUpk6Sps5cyTPO5XtFr2kFaiuQyKVKho1Ne/3Pnz/f9jzl9xao80zBgRqqcbfddnPZMRtvvHFiXV4raHhMvxdOzVPwpIZEJCGAAAIIIIAAAggggAACCCCAAAIIIIBA7QUI8AnMP/nkE9vos2TJEvO5z31uhb8k1y+P1Xj0zjvvmLXWWstsuOGGdQ8yCIraEpPyVyONGkG6dOliNt10U7PKKqs0VNl1Ts6bN8/+Ul35V+OV32tFHgqjYA8NH6ZePNTTiH6p3bFjx5plTfvX9aYGMyX1ZKFrLgwsqlmGiuwoCyedyzr+WZ7Haa6TrOyrUa4ih6Hi2br2VL5qn8squ87fv//97/a46tlTzx57ygHV6lgpAEYu//znP83aa69tn8lrrLFGuew13HIFiiioSY34undWcuyTAnzUU5mStqfhzhQcsckmm9g6SMOhrECGdf2oRxc9J1XvylNKe41rmBwX3KHAjjC4LU9lrVZeFKyge7KOb7XvydUqQ3u2qyBTPZtdUkDF5z//eTdpXxX0q0Avl0oFCbl1avGqXkX9oBLt0w98K5eHNHWSLOrMlTzj0l7T5Qz85Wmvfz0T1Duenp8K5Gnv3xAaetAPblTe
FLC03377+dnkPQIIIIAAAggggAACCCCAAAIIIIAAAgjUSIAAnxpBsxsEEEAAAQQQQCALgVIBPllsn20ggEB9Ba6++upYb1zqLU5BFuqJS73IPPbYY3aINn8IprCXqVqWQD1oKTBHPQ6GvaFqOLORI0fWMjvsK6WAAgoVWPTMM8+0GQJaPVlpyMe8BdGnLDIfRwABBBBAAAEEEEAAAQQQQAABBBBAoGEECPBpmENFRhFAAAEEEEAAAWMI8OEsQKC5BRQss9NOO1VcSPXeo6CLrl27VvyZLFfs3bt3bDgzf9uzZ8+mF1MfpAHeDx482A6Nl5TVYsNCJ63LPAQQQAABBBBAAAEEEEAAAQQQQAABBBDIXoAAn+xN2SICCCCAAAIIIFA1AQJ8qkbLhhHIjcCcOXPMiBEjjHrHKZV22WUXM378+LoG0SQF+Gg4KA3l1KdPn1LZZ1kOBYoF+KgXKfUURUIAAQQQQAABBBBAAAEEEEAAAQQQQACB+gkQ4FM/e/aMAAIIIIAAAgi0W+BPf/qTueKKK6LPde/e3QYCRDN4gwACTSGwePFiM2XKFPPiiy+al19+2cyfP9+sttpqZrvttrPDdW277bZm3333NR06dKhreV2Az+abb2623HJL06tXL6OhuTp16lTXfLHzFRMYMmSIueeee4yGhtt6663tMT3mmGNMjx49VmyDfAoBBBBAAAEEEEAAAQQQQAABBBBAAAEEMhMgwCczSjaEAAIIIIAAAggggAACCCCAAAIIIIAAAggggAACCCCAAAIIIIAAAggggED2AgT4ZG/KFhFAAAEEEEAAAQQQQAABBBBAAAEEEEAAAQQQQAABBBBAAAEEEEAAAQQQyEyAAJ/MKNkQAggggAACCCCAAAIIIIAAAggggAACCCCAAAIIIIAAAggggAACCCCAAALZCxDgk70pW0QAAQQQQAABBBBAAAEEEEAAAQQQQAABBBBAAAEEEEAAAQQQQAABBBBAIDMBAnwyo2RDCCCAAAIIIIAAAggggAACCCCAAAIIIIAAAggggAACCCCAAAIIIIAAAghkL0CAT/ambBEBBBBAAAEEEEAAAQQQQAABBBBAAAEEEEAAAQQQQAABBBBAAAEEEEAAgcwECPDJjJINIYAAAggggAACCCCAAAIIIIAAAggggAACCCCAAAIIIIAAAggggAACCCCQvQABPtmbskUEEEAAAQQQQAABBBBAAAEEEEAAAQQQQAABBBBAAAEEEEAAAQQQQAABBDITIMAnM0o2hAACCCCAAAIIIIAAAggggAACCCCAAAIIIIAAAggggAACCCCAAAIIIIBA9gIE+GRvyhYRQAABBBBAAAEEEEAAAQQQQAABBBBAAAEEEEAAAQQQQAABBBBAAAEEEMhMgACfzCjZEAIIIIAAAggggAACCCCAAAIIIIAAAggggAACCCCAAAIIIIAAAggggAAC2QsQ4JO9KVtEAAEEEEAAAQQQQAABBBBAAAEEEEAAAQQQQAABBBBAAAEEEEAAAQQQQCAzAQJ8MqNkQwgggAACCCCAAAIIIIAAAggggAACCCCAAAIIIIAAAggggAACCCCAAAIIZC9AgE/2pmwRAQQQQAABBBBAAAEEEEAAAQQQQAABBBBAAAEEEEAAAQQQQAABBBBAAIHMBIoG+DzyyCOZ7YQNIYAAAggggAACCCCAAAIIIIAAAk5gr732cm95RQABBBBAAAEEEEAAAQQQQAABBBBAAIEKBAjwqQCJVRBAAAEEEEAAAQQQQAABBBBAIDsBAnyys2RLCCCAAAIIIIAAAggggAACCCCAAAKtIVA0wKc1ik8pEUAAAQQQQAABBBBAAAEEEEAAAQQQQAABBBBAAAEEEEAAAQQQQAABBBDItwABPvk+PuQOAQQQQAABBBBAAAEEEEAAAQQQQAABBBBAAAEEEEAAAQQQQAABBBBAoMUFCPBp8ROA4iOAAAIIIIAAAggggAACCCCAAAIIIIAAAggggAACCCCA
AAIIIIAAAgjkW4AAn3wfH3KHAAIIIIAAAggggAACCCCAAAIIIIAAAggggAACCCCAAAIIIIAAAggg0OICBPi0+AlA8RFAAAEEEEAAAQQQQAABBBBAAAEEEEAAAQQQQAABBBBAAAEEEEAAAQTyLUCAT76PD7lDAAEEEEAAAQQQQAABBBBAAAEEEEAAAQQQQAABBBBAAAEEEEAAAQQQaHEBAnxa/ASg+AgggAACCCCAAAIIIIAAAggggAACCCCAAAIIIIAAAggggAACCCCAAAL5FiDAJ9/Hh9whgAACCCCAAAIIIIAAAggggAACCCCAAAIIIIAAAggggAACCCCAAAIItLgAAT4tfgJQfAQQQAABBBBAAAEEEEAAAQQQQAABBBBAAAEEEEAAAQQQQAABBBBAAIF8CxDgk+/jQ+4QQAABBBBAAAEEEEAAAQQQQAABBBBAAAEEEEAAAQQQQAABBBBAAAEEWlyAAJ8WPwEoPgIIIIAAAggggAACCCCAAAIIIIAAAggggAACCCCAAAIIIIAAAggggEC+BQjwyffxIXcIIIAAAggggAACCCCAAAIIIIAAAggggAACCCCAAAIIIIAAAggggAACLS5AgE+LnwAUHwEEEEAAAQQQQAABBBBAAAEEEEAAAQQQQAABBBBAAAEEEEAAAQQQQCDfAgT45Pv4kDsEEEAAAQQQQAABBBBAAAEEEEAAAQQQQAABBBBAAAEEEEAAAQQQQACBFhcgwKfFTwCKjwACCCCAAAIIIIAAAggggAACCCCAAAIIIIAAAggggAACCCCAAAIIIJBvAQJ88n18yB0CCCCAAAIIIIAAAggggAACCCCAAAIIIIAAAggggAACCCCAAAIIIIBAiwsQ4NPiJwDFRwABBBBAAAEEEEAAAQQQQAABBBBAAAEEEEAAAQQQQAABBBBAAAEEEMi3AAE++T4+5A4BBBBAAAEEEEAAAQQQQAABBBBAAAEEEEAAAQQQQAABBBBAAAEEEECgxQUI8GnxE4DiI4AAAggggAACCCCAAAIIIIAAAggggAACCCCAAAIIIIAAAggggAACCORboCYBPh988IGZN2+e+ctf/mLWXHNNs+mmm5quXbuaDh065FuH3CGAQEMILF682Lz55ptmwYIFZoMNNjDdunUza621VkPknUwigEB1BD799FMzf/58W/9Ye+21bb2jS5cu1dlZDrbaauXNAXlNs8Bzrqbc7AwBBP4twL2HUwEBBBBAAAEEEEAAAQQQQAABBBBAAIF8CVQ1wOftt982119/vZk0aVKbUnfv3t0MGTLEHHjggWallVZqs5wZtRGYNm2aufTSS80nn3xittxyS3P77bfXZsd12EszlLUZypDloV+6dKm56aabzLhx49psdtCgQUb/CPRpQ5OLGbrnTJgwwUyZMsXm5+ijj7bPhFxkro6ZwCUb/BkzZpiLL77YLFy4MLbBb3zjG+ass84ym2++eWx+o0+0Wnkb/Xi1J/8859qj1Trr8qxonWNdr5Jy76mXPPtFAAEEEEAAAQQQQAABBBBAAAEEEECgtEDVAnz+/Oc/myOOOKJN41qYnRNOOMGce+65BPmEMFWe/vDDD81FF11k7rzzzmhPCvB54IEHoulmedMMZW2GMmR9Pqm3isGDB5v777+/6Ka33357c+utt5rVV1+96DosqL2Aels644wzzIsvvhjtXM+CkSNHRtOt+AaXbI66gv5Gjx5ddGPqSfDee++1PfoUXamBFrRaeRvo0KTOKs+51IRNuQGeFU15WHNVKO49uTocZAYBBBBAAAEEEEAAAQQQQAABBBBAAIGYQFUCfPSLv/3228+89dZbsZ0VmzjnnHPMwIEDiy1mfsYCc+fOtY3r4fFpxgCfZihrM5Qh41PYbu7HP/6xGT9+vH2vc/fCCy80PXv2tMPxqOeOZ5991i476qijzJgxY6qRBba5AgJTp041w4cPb/PJVg/wwaXNKbFCM55++mkzYMAA+1kF8iiQdY89
9jAaKvTaa6+NeoxSL4IPPvhgww8V2mrlXaGTooE/xHOugQ9elbLOs6JKsGw2JsC9J8bBBAIIIIAAAggggAACCCCAAAIIIIAAArkSqEqAz8SJE80ll1wSK+gxxxxje/RRUMmPfvQj88orr8SWz5kzx6y//vqxeUxkL/Dzn//c9pjktnzyySebJ554wvak0WwBPs1Q1mYogzvXsnx9//33zW677WYWLVpkN/vb3/7WbLbZZtEu1Ji/yy67RMtnzZrVNL11RIVswDfDhg0zd999d5Tzyy67zJx99tl2upUDfHCJTonUb7797W+bJ5980m5HdQ0NyeWn4447zuh+oXTllVea/v37+4sb7n2rlbfhDlCKDPOcS4HXpB/lWdGkBzZnxeLek7MDQnYQQAABBBBAAAEEEEAAAQQQQAABBBAIBDIP8FGX3tttt13UsK79qTcf/XLepfnz59tf1LtpvQ4dOtScdtpp/izeV0Fg0KBBdhguBVNNmDDB7LrrrsY1EDZbgE8zlLUZylCF09hMnjzZnHfeeXbTClI766yz2uzmZz/7mbngggtKrtPmQ8yoqsAWW2xht//FL37RXHXVVWbTTTc1W221lZ3XygE+uGRz2r366qtm//33j84x9XSx8sorxzb+8ssvmwMOOMDO+8IXvmDuueee2PJGmmi18jbSsckirzznslBsrm3wrGiu45nX0nDvyeuRIV8IIIAAAggggAACCCCAAAIIIIAAAgj8SyDzAJ8XX3yxzS/iZ8yYYbbZZpuY+fnnn29uu+22aJ4afH/1q19F07ypjoACRjSE2rhx40yXLl3sTpo5wKfRy9pKx6s9Z7yCAXVfUXruuedM586d23z8k08+sb34vPvuu6bRG/LbFK5BZ6hxUuf0kCFDzKqrrmqWLVtGgE/hWOKSzQntN0pOmjTJ9OvXL3HDgwcPNtOnT7fLnn/+ebPOOuskrpf3ma1W3rwfj6zzx3Mua9HG3x7PisY/ho1QAu49jXCUyCMCCCCAAAIIIIAAAggggAACCCCAQCsLZB7g89Of/tSMHTs2Mi3WK8zTTz9tBgwYEK2nNy+88ILp1KlTbB4T2Qq89NJLplevXrFeDZo1wKcZytoMZcj2DP7X1nr37m17Cdt8883No48+WnQX6hXGLZflmmuuWXRdFlRfYO7cuaZPnz7Rjgjw+RcFLtEpkeqNAsdcjzyzZ882G220UeL2/HrKLbfcYvr27Zu4Xt5ntlp58348ss4fz7msRRt/ezwrGv8YNkIJuPc0wlEijwgggAACCCCAAAIIIIAAAggggAACrSyQeYDPRRddZG6++ebIdM899zQ33nhjNO3evPnmm0bL/DRz5kzTo0cPfxbvayBw5JFHmjlz5phiwVg1yELNdtEMZW2GMqQ54B9//HHUI5h66FBPHcWS38ZXIrEAAAfUSURBVFPYrFmzTNeuXYutyvw6CKiXpa233truuZWH6ArpcQlFKpt290at/dprr5kOHTokflC9f7khQdWb3WGHHZa4Xt5ntlp58348sswfz7ksNZt3WzwrmvfY1qtk3HvqJc9+EUAAAQQQQAABBBBAAAEEEEAAAQQQqFwg8wCfM844w0ybNi3KwTHHHGMU9BOmxYsX255k/Pl33nmn+fKXv+zP4n0NBFwjIQE+NcDOYBetdLySuDTk1o477mgXFbu/uM/5PXX8+te/NvpVMik/AjROJh8LXJJdys3db7/9bGBPuZ69NCzXoYceajc3cuRIo+CyRkytVt5GPEYrmmeecysq11qf41nRWse7FqXl3lMLZfaBAAIIIIAAAggggAACCCCAAAIIIIBAOoHMA3z8IXGUNQX86F9S2mKLLWKz1ROHeuQg1VaglQJGmqGszVCGNGf4W2+9Fd0nNETN6aefXnRzd999txk2bJhdPnnyZLPzzjsXXZcFtRegcTLZHJdkl3JzFfjnGifvuOOOoqu35x5SdCM5WNBq5c0Bec2y0J5zlOdczQ5L7nbE
syJ3h6ThM8S9p+EPIQVAAAEEEEAAAQQQQAABBBBAAAEEEGgBgcwDfAYPHmymT58e0R177LFm1KhR0bR7s3TpUtOzZ083aV+nTJlidthhh9g8Jqov0EoBI81Q1mYoQ5qz+u233zZf+cpX7CaK3V/c9idOnGguueQSO6lG0D59+rhFvOZAgMbJ5IOAS7JLubl77723eeONN0z37t3Nww8/XHT1uXPnmkMOOcQuP+ecc8zAgQOLrpvnBa1W3jwfi6zzxnMua9Hm3B7PiuY8rvUsFfeeeuqzbwQQQAABBBBAAAEEEEAAAQQQQAABBCoTyDzAR8E8t956a7R3NUDdcMMN0bR74/9C0M174IEHjIaJItVWoJUCRpqhrM1QhjRnuB8cuOeee5obb7yx6OYuvPBCc8stt9jljzzyiOnWrVvRdVlQewEaJ5PNcUl2KTf38MMPN88++6xd7fXXXzcrr7xy4kfuv/9+c8opp9hl48aNM4cddljienmf2WrlzfvxyDJ/POey1GzebfGsaN5jW6+Sce+plzz7RQABBBBAAAEEEEAAAQQQQAABBBBAoHKBzAN8rr32WnP55ZdHOVDAjgJ3wvTMM8+YI444Ijb7ueeeM507d47NY6L6Aq0UMNIMZW2GMqQ9q3v37m0WLVpUtqeOE0880cyaNcvu7oUXXjCdOnVKu2s+n6EAjZPJmLgku5Sb6/cg+NRTT5kNNtgg8SMKChwzZoxdpvcKFGzE1GrlbcRjlCbPPOfS6LXGZ3lWtMZxrnUpuffUWpz9IYAAAggggAACCCCAAAIIIIAAAggg0D6BzAN8FKQT/hpeDexdu3aN5eyyyy4z119/fTRvm222MTNmzIimeVM7gVYKGGmGsjZDGdKe3SeddJKZOXOm3czvf/97s8Yaa7TZ5Keffmr69u1rFi5caHsGSwo0bPMhZtRUgMbJZG5ckl3KzVVvXeq1S2ny5Mlm5513TvzIiBEjzF133WWXKdh4vfXWS1wv7zNbrbx5Px5Z54/nXNaizbc9nhXNd0zzUCLuPXk4CuQBAQQQQAABBBBAAAEEEEAAAQQQQACB4gKZB/gsW7bMfOlLX7K9a7jdHnLIIWb8+PFu0vztb39r0/CmX6KfeeaZ0Tq8qZ1AKwWMNENZm6EMac/um266yYwePdpuZujQoea0005rs8mpU6ea4cOH2/knnHCCGTlyZJt1mFFfARonk/1xSXYpN1fBfgcddJBd7atf/aq5/fbb23xk3rx5Zq+99rLzi/Uw2OZDOZ3RauXN6WGoWrZ4zlWNtmk2zLOiaQ5lrgrCvSdXh4PMIIAAAggggAACCCCAAAIIIIAAAggg0EYg8wAf7eHqq682EyZMiO1MwTsakkuNa1dddZV58sknY8sff/xx89nPfjY2j4naCLRSwEgzlLUZypD2zH7nnXfMTjvtFG3m6aefNl26dImmFy9ebHbffXfz7rvv2nn333+/2WqrraLlvMmHAI2TyccBl2SXcnOXL19uDj74YKPAF6VJkyaZfv362ffuPwUDut4CFSR49NFHu0UN99pq5W24A5QywzznUgK2wMd5VrTAQa5DEbn31AGdXSKAAAIIIIAAAggggAACCCCAAAIIINAOgaoE+CxatMg2qrnG9XL5Of30082QIUPKrcbyKgm0UsBIM5S1GcqQxak8btw4c91119lNbb/99mbs2LGmR48eZsGCBWbUqFHREF4HHHCAueaaa7LYJdvIWIDGyWRQXJJdKpmrIUFPPPFEu+r6669vg43Vm8+SJUvMxIkTbYCxFmrZY489Zjp27FjJZnO7TquVN7cHokoZ4zlXJdgm2SzPiiY5kDksBveeHB4UsoQAAggggAACCCCAAAIIIIAAAggggMC/BaoS4KNtv/rqq+bQQw+NDdWVpN6/f39z+eWXmw4dOiQtZl4NBFopYKQZytoMZcjitF66dKltyH/iiSeKbk5D8EyZMsWss846RddhQf0EaJxMtscl2aXSuVdc
cYX5yU9+UnL1adOmmV69epVcp1EWtlp5G+W4ZJFPnnNZKDbvNnhWNO+xrXfJuPfU+wiwfwQQQAABBBBAAAEEEEAAAQQQQAABBIoL/D8AAAD//7ODv2EAACF7SURBVO3dC9AVZf0H8AdQSi11hGoyAxqjEm8R5egU2tjFS1OiRVhhKJFp4T1NxNsoXkIQCwNzShAsbSQTZlBJuulkkTFOYU0CFRBqijkV0ICC/v/PMu/re9nzcnjPnnP27H52Rjnn2efsPs/nt7LHc75nt8+r/7+EOi1r164N06ZNC4sXL+62h7322itccMEFYdy4caFfv37d1mtonMDYsWPDY489FoYOHRqWLFnSuB03YU9FmGsR5pBV6Tdt2hRuueWWcMcdd3Tb5Gc+85lwySWXhIEDB3ZbpyEfAtu2bQvvete7ksGMHz8+XH755fkYWJNHwaW2AsS3NXfddVeYOnVq2Lx5c6eNjRgxIlx99dXh4IMP7tTeyk/KNt9WrlVvxu481xu1crzGuaIcdW7WLP3d0yx5+yVAgAABAgQIECBAgAABAgQIECDQs0CfegZ82na9fv36sHr16vD000+HPffcMwwaNCgMGzYs7LHHHm1d/EmAAIFeCzz//PNh1apVYcOGDWHfffdNQiP7779/r7fnhQQItL5ADPf85S9/Cc8880zo379/GDJkSPJ3Q58+fVp/cikzKNt8UwgK3eQ8V+jymhyB3Ar4uye3pTEwAgQIECBAgAABAgQIECBAgACBkgo0JOBTUlvTJkCAAAECBAgQIECAAAECBAgQIECAAAECBAgQIECAAAECBAgQIFCzgIBPzYQ2QIAAAQIECBAgQIAAAQIECBAgQIAAAQIECBAgQIAAAQIECBAgQKB+AgI+9bO1ZQIECBAgQIAAAQIECBAgQIAAAQIECBAgQIAAAQIECBAgQIAAAQI1Cwj41ExoAwQIECBAgAABAgQIECBAgAABAgQIECBAgAABAgQIECBAgAABAgTqJyDgUz9bWyZAgAABAgQIECBAgAABAgQIECBAgAABAgQIECBAgAABAgQIECBQs4CAT82ENkCAAAECBAgQIECAAAECBAgQIECAAAECBAgQIECAAAECBAgQIECgfgICPvWztWUCBAgQIECAAAECBAgQIECAAAECBAgQIECAAAECBAgQIECAAAECNQsI+NRMaAMECBAgQIAAAQIECBAgQIAAAQIECBAgQIAAAQIECBAgQIAAAQIE6icg4FM/W1smQIAAAQIECBAgQIAAAQIECBAgQIAAAQIECBAgQIAAAQIECBAgULOAgE/NhDZAgAABAgQIECBAgAABAgQIECBAgAABAgQIECBAgAABAgQIECBAoH4CAj71s7VlAgQIECBAgAABAgQIECBAgAABAgQIECBAgAABAgQIECBAgAABAjULCPjUTGgDBAgQIECAAAECBAgQIECAAAECBAgQIECAAAECBAgQIECAAAECBOonIOBTP1tbJkCAAAECBAgQIECAAAECBAgQIECAAAECBAgQIECAAAECBAgQIFCzgIBPzYQ2QIAAAQIECBAgQIAAAQIECBAgQIAAAQIECBAgQIAAAQIECBAgQKB+AgI+9bO1ZQIECBAgQIAAAQIECBAgQIAAAQIECBAgQIAAAQIECBAgQIAAAQI1Cwj41ExoAwQIECBAgAABAgQIECBAgAABAgQIECBAgAABAgQIECBAgAABAgTqJyDgUz9bWyZAgAABAgQIECBAgAABAgQIECBAgAABAgQIECBAgAABAgQIECBQs4CAT82ENkCAAAECBAgQIECAAAECBAgQIECAAAECBAgQIECAAAECBAgQIECgfgIVAz7/+9//6rdXWyZAgAABAgQIECBAgAABAgRKK7DnnnuWdu4mToAAAQIECBAgQIAAAQIECBAgQKA3AgI+vVHzGgIECBAgQIAAAQIECBAgQKDXAgI+vabzQgIECBAgQIAAAQIECBAgQIAAgZIKCPiU
tPCmTYAAAQIECBAgQIAAAQIEmiUg4NMsefslQIAAAQIECBAgQIAAAQIECBBoVQEBn1atnHETIECAAAECBAgQIECAAIEWFRDwadHCGTYBAgQIECBAgAABAgQIECBAgEDTBAR8mkZvxwQIECBAgAABAgQIECBAoJwCAj7lrLtZEyBAgAABAgQIECBAgAABAgQI9F5AwKf3dl5JgAABAgQIECBAgAABAgQI9EJAwKcXaF5CgAABAgQIECBAgAABAgQIECBQagEBn1KX3+QJECBAgAABAgQIECBAgEDjBQR8Gm9ujwQIECBAgAABAgQIECBAgAABAq0tIODT2vUzegIECBAgQIAAAQIECBAg0HICAj4tVzIDJkCAAAECBAgQIECAAAECBAgQaLKAgE+TC2D3BAgQIECAAAECBAgQIECgbAICPmWruPkSIECAAAECBAgQIECAAAECBAjUKiDgU6ug1xMgQIAAAQIECBAgQIAAAQK7JCDgs0tcOhMgQIAAAQIECBAgQIAAAQIECBAIAj4OAgIECBAgQIAAAQIECBAgQKChAgI+DeW2MwIECBAgQIAAAQIECBAgQIAAgQIICPgUoIimQIAAAQIECBAgQIAAAQIEWklAwKeVqmWsBAgQIECAAAECBAgQIECAAAECeRAQ8MlDFYyBAAECBAgQIECAAAECBAiUSEDAp0TFNlUCBAgQIECAAAECBAgQIECAAIFMBAR8MmG0EQIECBAgQIAAAQIECBAgQKBaAQGfaqX0I0CAAAECBAgQIECAAAECBAgQILBDQMDHkUCAAAECBAgQIECAAAECBAg0VEDAp6HcdkaAAAECBAgQIECAAAECBAgQIFAAAQGfAhTRFAgQIECAAAECBAgQIECAQCsJCPi0UrWMlQABAgQIECBAgAABAgQIECBAIA8CAj55qIIxECBAgAABAgQIECBAgACBEgkI+JSo2KZKgAABAgQIECBAgAABAgQIECCQiYCATyaMNkKAAAECBAgQIECAAAECBAhUKyDgU62UfgQIECBAgAABAgQIECBAgAABAgR2CAj4OBIIECBAgAABAgQIECBAgACBhgoI+DSU284IECBAgAABAgQIECBAgAABAgQKICDgU4AimkK6wPbt28P69evDunXrwhve8IYwaNCgMGDAgPTOOW3dsmVLMv5nn302DBw4MAwePDiZS06Ha1gECBAgQIBAnQTie4KnnnoqvPTSS2HYsGFhr732qtOebDaPAv/5z3/CmjVrwr///e9wwAEHhLe//e2hf//+eRyqMRGoWkDAp2oqHQkQIECAAAECBAgQIECAAAECBAgkAg0P+MQvJq688srwzDPPtJdg5MiR4frrr29/7kHjBB544IEwbdq08PLLL4cDDzwwzJ07t3E7r+OelixZEr75zW+G5557rtNejjvuuHDBBRckX4p0WpGzJ1u3bg3z588PM2bM6DayCRMmhC9/+cuCPt1kdjTEY/k73/lO+PGPf5w0jBkzJkycOLFCb83NElCndHku6S5ZtBb1fFfJpmzzreRQhPaf/vSnYd68eeGJJ57oNJ0Y8vnKV74SPvrRj3Zq96RYAhs2bAgzZ85sf1/TNrv99tsvnHfeeeGUU04Jffv2bWv2J4G6CmR9bhHwqWu5bJwAAQIECBAgQIAAAQIECBAgQKCAAg0L+MSrqdx5551h+vTp3Rg//OEPJ1/Id1uhoW4CmzZtCjfccEO4//772/cRAz6LFi1qf96qD+KXYDHcU2mJv3hfsGBBckWfSn2a2R7/W7nooovCww8/XHEYw4cPD9/73vfC61//+op9yrgiXq3p4osvDk8++WT79E877bRw6aWXtj/3oPkC6pReAy7pLrW2Fvl8l2ZTtvmmGRSp7e677w5TpkzpcUqXX355+NznPtdjHytbU+DFF19MahuvSFlpOf3005P3PpXWayeQhUC9zi0CPllUxzYIECBAgAABAgQIECBAgAABAgTKJNCQgE/8UPqy
yy4Ly5cvT7UV8EllqVvjihUrwte//vXk9lUdd1KEgM/vf//7MG7cuGRaMcgTv/T60Ic+FOKH0jEQ03ZVlyFDhiRhpn79+nUkyMXj7373u+Hb3/52MpZYkziH97znPWHt2rVJcKntF/yjR48OV199dS7GnIdBLFy4MPl7putYBHy6ijT3uTql+3NJd6m1tcjnuzSbss03zaBIbQ8++GDyfi3OKV6tJV4B87DDDgvxKn+//vWvOwV/5syZE4444ogiTd9c/l9g/PjxYdmyZYnFsccem1yx501velPy/1RXXXVViAGguNx0003hxBNPTB77F4GsBep5bhHwybpatkeAAAECBAgQIECAAAECBAgQIFB0gboHfP72t7+FT37ykz06Cvj0yJPpynvvvbdTKORLX/pS8sVBvOJJEQI+8VfMjz/+eGJ28803h3hLro7LmWeemXwpFttuvPHGnR6bHV/biMf//e9/k1ttbN68OdldvC3H2972tvZdb9y4MXzkIx8Jbevjl3+DBg1qX1/WB5MmTep09alrr702XHHFFQmHgE9+jgp1Sq8Fl3SXWluLfr7r6lO2+XadfxGfx9svLV26NMTA8n333RcOOOCATtP82c9+Fs4999ykLQZB4tX/LMUR+N3vfhfOOOOMZELxPfpPfvKT0DGYHgPfY8eOTdbHYyPeOqnj+uJImEkzBep9bhHwaWZ17ZsAAQIECBAgQIAAAQIECBAgQKAVBeoe8Ol4RZUINGzYsOSfeIuktkXAp02i/n+2fVkUfwk+derUcNRRR4W2UEyrB3xWrVoVRo0alSAecsghId7Wom/fvp1Qn3rqqXDKKackbfFYjB9a52n50Y9+FK655ppkSDF8deGFF3YbXsfbdVTq0+1FBW84+OCDkxnGusdfse+///7h8MMPT9oEfPJTfHVKrwWXdJdaW4t8vkuzKdt80wyK1LZt27b289iYMWOSq/d0nd8rr7wSjjzyyCT0G69MuHjx4q5dPG9hgRjYeuihh5IZzJo1KxxzzDHdZnP++ee339L1tttuCyNHjuzWRwOBWgTqfW4R8KmlOl5LgAABAgQIECBAgAABAgQIECBQRoGGBny+9rWvhXgFlRhQiFdPaVsEfNok6v9n/JA23trhuuuuCwMGDEh2WJSAzz333BPilVvi0tOXHB2/MPnNb34T9t577+Q1efhXDPQsWbIkGcpjjz0W9tlnn27Devnll0O8TUO8LUMeQ0rdBtyAhhiQmDBhQpg4cWLYfffdQ8cvRgV8GlCAKnehTulQXNJdam0t8vkuzaZs800zKFLb888/H2bOnBliiOekk06qePutGP6JV2GMwe1HH320SASlnkvH8FYML8cAeNqyevXq5PiI64S+04S01SpQ73OLgE+tFfJ6AgQIECBAgAABAgQIECBAgACBsgk0JOAzefLk5Koahx12WOI7f/58AZ8mHWnxS6AYCul4ZZuiBHwuueSS9l+v/+IXvwhvfvObU5XvuOOOMH369GTd7bffHj74wQ+m9mtG4xFHHJH8Ej/eaqEt6JM2jrPOOqv9i7x4C4d4+44yLytWrAiHHnpoO4GATztFrh6oU3o5uKS71Npa5PNdmk3Z5ptmULa2LVu2hBEjRiTTFpYvVvU73uL41FNPbb/taNdZbt++PbT9/1VPQaCur/OcQLUC9T63CPhUWwn9CBAgQIAAAQIECBAgQIAAAQIECOwQqHvAZ+PGjWG33XYLe+yxR7u5gE87RS4efPGLXwzLly8PrX6LrrZ5RNQ//vGPoV+/fqm+MTjTduureCWjttt6pXZuYONLL70Uhg8fnuwx3mIhXoWo0hKvVBSvWBSXBx98MAwaNKhS11K2x6scvfe9703m7go++T0E1Cm9NlzSXbJobTtPtPr5rlqLss23Wpei9Js9e3a49dZbk+nE9wVttyAtyvzKPI8Y3j7jjDMSgnjlyfHjx1fkOO6448L69etdxamikBVZC2R5bhHwybo6tkeA
AAECBAgQIECAAAECBAgQIFB0gboHfNIABXzSVJrXluWHtM2bRQif+tSnwl//+tews6vf/OEPfwif//znk6HGq/6MGzeumcNu33e85VYM9sSlp19rx/Udr0IUb9sQf7VteU1AQOI1izw/Uqf06nBJd8mitSjnu2otyjbfal2K0O+JJ54IY8eOTaYSr8wYQ7+Vgs1FmG/Z5rB06dIQb40Ul2nTpoUTTjihIkHHqzr+6U9/qtjPCgJZCWR5bhHwyaoqtkOAAAECBAgQIECAAAECBAgQIFAWAQGfslS6h3lm+SFtD7up+6oYjokhmXi7innz5lXcX/yVc/y1c1wmTpwYzj777Ip9G7niH//4Rzj++OOTXe5sXIsWLQqTJk1K+s6ZMyfEW3tZXhMQkHjNIs+P1Cm9OlzSXbJoLcr5rlqLss23WpdW77d27dowevTo5JaecS4LFiwIBx10UKtPy/g7CCxcuDBcdtllScvO3uddccUV4b777kv6xuBX//79O2zJQwLZC2R5bhHwyb4+tkiAAAECBAgQIECAAAECBAgQIFBsAQGfYte3qtll+SFtVTusU6dPfOITYc2aNWHIkCFh8eLFFfeyYsWK5Ao5scPFF18cTj/99Ip9G7nihRdeCMccc0yyy3iFocmTJ1fc/dy5c8NNN92UrI+/2j/00EMr9i3jCgGJ1qi6OqXXiUu6SxatRTnfVWtRtvlW69LK/eJ7hS984QvJLZniPGbNmtX+3qGV52XsnQUefvjhcP755yeNM2bMCB//+Mc7d+jwLAbVH3nkkaTFFXw6wHhYN4Eszy0CPnUrkw0TIECAAAECBAgQIECAAAECBAgUVEDAp6CF3ZVpZfkh7a7sN+u+8VYV8ZfLcYkhnr59+6buouOXJlOmTAknn3xyar9GN27dujW8733vS3Z79NFHh9mzZ1ccwvXXXx9+8IMfJOsfeOCBMHjw4Ip9y7hCQKI1qq5O6XXiku6SRWtRznfVWpRtvtW6tGq/jRs3hljTlStXJlO45pprwqc//elWnY5x9yCwbNmyMH78+KTHzm4n2xZw32+//cKjjz7aw1atIpCNQJbnFgGfbGpiKwQIECBAgAABAgQIECBAgAABAuUREPApT60rzjTLD2kr7qQBKy666KLw0EMPJXv61a9+FQYOHJi61zvvvDNMnTo1WZe3X77HW21t3rx5p1ch6vhr7d/+9rfhjW98Y+pcy9ooINEalVen9DpxSXfJorUo57tqLco232pdWrHfli1bwplnnhmWL1+eDP+8885LnrfiXIx55wKrVq0Ko0aNSjrGKza13a6r6ytfeeWV9qs4Dhs2LNx7771du3hOIHOBLM8tAj6Zl8cGCRAgQIAAAQIECBAgQIAAAQIECi4g4FPwAlczvSw/pK1mf/XqE69oE69sE5c5c+aEGJZJW+KXJAsXLkxWxV86x18852U555xzws9//vNkOI8//nhI+9B7+/bt4WMf+1h47rnnwoEHHhgWLVqUl+HnZhwCErkpRY8DUad0Hi7pLlm0FuV8V61F2eZbrUur9du2bVuIgZ5f/vKXydDPOuusEN8vWIorEN/rHXXUUUnoe8SIEWHevHmpk123bl044YQTknXxv/dvfOMbqf00EshSIMtzS9r/62Q5VtsiQIAAAQIECBAgQIAAAQIECBAgUDQBAZ+iVbQX88nyQ9pe7D6zl/z5z38Oo0ePTrb3gQ98IMydO7fbtteuXRtOPPHEpD2P4Zj58+eHG2+8MRlf/PIufonXdYnhpLZfcvsyp6vOjucCEukueWtVp/SKcEl3yaK1KOe7ai3KNt9qXVqpX7xCy+TJk9vDvPG2TfGKhZbiC3QMfcerT77//e/vNul4bNx///1J+8yZM8Oxxx7brY8GAlkLZHluEfDJujq2R4AAAQIECBAgQIAAAQIECBAgUHQBAZ+iV7iK+WX5IW0Vu6tbl1dffTV89rOfDTHoE5fbbrstjBw5stP+LrzwwrBkyZKk
7corrwxjxozptL7ZT/71r3+Fo48+un0YjzzySBgwYED783iLjnj1nhdffDFpi1/qDB06tH29BzsEBCRa40hQp/Q6cUl3yaK1KOe7ai3KNt9qXVqpXwz9xvBvXOKtmiZNmhT69OnTSlMw1l4KxNvNfvWrX01efcghh4R77rmnU+1XrlwZTj755GR9vBrl0qVLw+te97pe7s3LCFQvkOW5RcCnenc9CRAgQIAAAQIECBAgQIAAAQIECEQBAR/HQcjyQ9pmc3b8MiR+2TF16tTkVl0xGBOv6DNr1qxkiHn+IuTmm28O3//+95NxDh8+PFx77bXhHe94R/jnP/8ZrrvuuvZbeB1//PFh+vTpzSbP5f4FJHJZlm6DUqduJEkDl3SXLFqLdL6rxqNs863GpJX6zJ49O9x6663tQ463Ie3bt2/7864P4i2dBg4c2LXZ8xYViMH1U089NTz55JPJDGIo/dxzzw377LNP0nbppZeGNWvWJOuuuuqqJOTeolM17BYTyPLcIuDTYsU3XAIECBAgQIAAAQIECBAgQIAAgaYLCPg0vQTNH0CWH9I2fzYhfOtb3wq33357j0NZsGBBOOigg3rs06yVW7duDWeffXZYtmxZxSHE24vdddddYe+9967Yp8wrBCRao/rqlF4nLukuWbQW7Xy3M5OyzXdnHq20fuPGjeHII4/cpSHPmzcvjBgxYpdeo3O+BZ5++ukk5NN25ca00Z500klhypQpPYa/0l6njUBvBbI8twj49LYKXkeAAAECBAgQIECAAAECBAgQIFBWgaYEfO6+++7kg+g29HjLoVtuuaXtqT8bLDB+/PgkTBJDI4sWLWrw3rPfXfzFc7yNwYwZM8LmzZs77SBeEWfy5Mm5Dfe0DXbTpk3Jr/bbbsvR1h7/HDVqVIi3Gut4666O6z0OYdu2beHwww9PKE477bQQf+VuyZ+AOqXXhEu6SxatRTvf7cykbPPdmUcrre9NwOeHP/xh+7mvleZqrD0LrFu3Ltxwww0h3ra163LOOeeECRMmhN12263rKs8J1E0gy3OLgE/dymTDBAgQIECAAAECBAgQIECAAAECBRVoSsCnoJamlTOBGO5ZuXJlePbZZ8Puu+8eBg8eHIYOHRr69OmTs5FWHs6GDRvC6tWrwwsvvBD23Xff8M53vjO89a1vrfwCawgQIECAAAECBAon8Pe//z25JVd8f/uWt7wlvPvd73Ylx8JVuXwTEvApX83NmAABAgQIECBAgAABAgQIECBAoDYBAZ/a/LyaAAECBAgQIECAAAECBAgQ2EUBAZ9dBNOdAAECBAgQIECAAAECBAgQIECg9AICPqU/BAAQIECAAAECBAgQIECAAIHGCgj4NNbb3ggQIECAAAECBAgQIECAAAECBFpfQMCn9WtoBgQIECBAgAABAgQIECBAoKUEBHxaqlwGS4AAAQIECBAgQIAAAQIECBAgkAMBAZ8cFMEQCBAgQIAAAQIECBAgQIBAmQQEfMpUbXMlQIAAAQIECBAgQIAAAQIECBDIQkDAJwtF2yBAgAABAgQIECBAgAABAgSqFhDwqZpKRwIECBAgQIAAAQIECBAgQIAAAQKJgICPA4EAAQIECBAgQIAAAQIECBBoqICAT0O57YwAAQIECBAgQIAAAQIECBAgQKAAAgI+BSiiKRAgQIAAAQIECBAgQIAAgVYSEPBppWoZKwECBAgQIECAAAECBAgQIECAQB4EBHzyUAVjIECAAAECBAgQIECAAAECJRIQ8ClRsU2VAAECBAgQIECAAAECBAgQIEAgEwEBn0wYbYQAAQIECBAgQIAAAQIECBCoVkDAp1op/QgQIECAAAECBAgQIECAAAECBAjsEBDwcSQQIECAAAECBAgQIECAAAECDRUQ8Gkot50RIECAAAECBAgQIECAAAECBAgUQEDApwBFNAUCBAgQIECAAAECBAgQINBKAgI+rVQtYyVAgAABAgQIECBAgAABAgQIEMiDgIBP
HqpgDAQIECBAgAABAgQIECBAoEQCAj4lKrapEiBAgAABAgQIECBAgAABAgQIZCIg4JMJo40QIECAAAECBAgQIECAAAEC1QoI+FQrpR8BAgQIECBAgAABAgQIECBAgACBHQICPo4EAgQIECBAgAABAgQIECBAoKECAj4N5bYzAgQIECBAgAABAgQIECBAgACBAggI+BSgiKZAgAABAgQIECBAgAABAgRaSUDAp5WqZawECBAgQIAAAQIECBAgQIAAAQJ5EBDwyUMVjIEAAQIECBAgQIAAAQIECJRIQMCnRMU2VQIECBAgQIAAAQIECBAgQIAAgUwEBHwyYbQRAgQIECBAgAABAgQIECBAoFoBAZ9qpfQjQIAAAQIECBAgQIAAAQIECBAgsENAwMeRQIAAAQIECBAgQIAAAQIECDRUQMCnodx2RoAAAQIECBAgQIAAAQIECBAgUAABAZ8CFNEUCBAgQIAAAQIECBAgQIBAKwkI+LRStYyVAAECBAgQIECAAAECBAgQIEAgDwIVAz55GJwxECBAgAABAgQIECBAgAABAgQIECBAgAABAgQIECBAgAABAgQIECi7gIBP2Y8A8ydAgAABAgQIECBAgAABAgQIECBAgAABAgQIECBAgAABAgQIEMi1gIBPrstjcAQIECBAgAABAgQIECBAgAABAgQIECBAgAABAgQIECBAgAABAmUXEPAp+xFg/gQIECBAgAABAgQIECBAgAABAgQIECBAgAABAgQIECBAgAABArkWEPDJdXkMjgABAgQIECBAgAABAgQIECBAgAABAgQIECBAgAABAgQIECBAoOwCAj5lPwLMnwABAgQIECBAgAABAgQIECBAgAABAgQIECBAgAABAgQIECBAINcCAj65Lo/BESBAgAABAgQIECBAgAABAgQIECBAgAABAgQIECBAgAABAgQIlF1AwKfsR4D5EyBAgAABAgQIECBAgAABAgQIECBAgAABAgQIECBAgAABAgQI5FpAwCfX5TE4AgQIECBAgAABAgQIECBAgAABAgQIECBAgAABAgQIECBAgACBsgsI+JT9CDB/AgQIECBAgAABAgQIECBAgAABAgQIECBAgAABAgQIECBAgACBXAsI+OS6PAZHgAABAgQIECBAgAABAgQIECBAgAABAgQIECBAgAABAgQIECBQdgEBn7IfAeZPgAABAgQIECBAgAABAgQIECBAgAABAgQIECBAgAABAgQIECCQawEBn1yXx+AIECBAgAABAgQIECBAgAABAgQIECBAgAABAgQIECBAgAABAgTKLiDgU/YjwPwJECBAgAABAgQIECBAgAABAgQIECBAgAABAgQIECBAgAABAgRyLSDgk+vyGBwBAgQIECBAgAABAgQIECBAgAABAgQIECBAgAABAgQIECBAgEDZBQR8yn4EmD8BAgQIECBAgAABAgQIECBAgAABAgQIECBAgAABAgQIECBAgECuBQR8cl0egyNAgAABAgQIECBAgAABAgQIECBAgAABAgQIECBAgAABAgQIECi7wP8BeoDCrR44KZAAAAAASUVORK5CYII=" } }, "cell_type": "markdown", "id": "b763fdee-675a-4aa4-a9aa-4e910f1b7886", "metadata": {}, "source": [ "We were able to reproduce the same output using the function. \n", "\n", "A Document-Term Matrix (DTM) is a matrix that represents the frequency of terms that occur in a collection of documents. Let's look at two sentences to understand what DTM is. 
\n", "\n", "Let's say that we have the following two sentences:\n", "```\n", "sentence_1 = 'He is walking down the street.'\n", "\n", "sentence_2 = 'She walked up then walked down the street yesterday.'\n", "```\n", "The DTM of the above two sentences will be:\n", "\n", "![image.png](attachment:938d222b-5a6d-4e1d-960f-9bcf1a130b85.png)\n", "\n", "In this DTM, numbers indicate how many times that particular term was observed in the given sentence. For example, \"down\" is present once in both sentences, while \"walked\" appears twice but only in the second sentence. \n", "\n", "Now let's look at how we can implement a DTM concept, using `sklearn`'s `CountVectorizer`. Note that the DTM that is initially created using `sklearn` is in the form of a sparse matrix/array (i.e. most of the entries are zero). This is done for efficiency reasons but we will need to convert the sparse array to a dense array (i.e. most of the values are non-zero). Since understanding the differentiation between sparse and dense arrays are not the intention of this post, we won't go deeper into that topic.\n", "\n", "**Question 5:**\n", "\n", "Define a function named \"create_dtm\" that creates a Document-Term Matrix in the form of a dataframe for a given series of strings. Then test it on the top five rows of our data set.\n", "\n", "**Answer:**" ] }, { "cell_type": "code", "execution_count": 53, "id": "97a28516-7328-44ed-92ef-c473b0026254", "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
aboutactingaimlessalmostandanglesanythingartinessasattempting...tryingverywalkedwaswhenwhitewhowhomwithyoung
01010000000...0300000001
10000000000...0011001100
20101310111...0001010010
30000001000...0100000000
40000000000...1001100000
\n", "

5 rows × 68 columns

\n", "
" ], "text/plain": [ " about acting aimless almost and angles anything artiness as \\\n", "0 1 0 1 0 0 0 0 0 0 \n", "1 0 0 0 0 0 0 0 0 0 \n", "2 0 1 0 1 3 1 0 1 1 \n", "3 0 0 0 0 0 0 1 0 0 \n", "4 0 0 0 0 0 0 0 0 0 \n", "\n", " attempting ... trying very walked was when white who whom with \\\n", "0 0 ... 0 3 0 0 0 0 0 0 0 \n", "1 0 ... 0 0 1 1 0 0 1 1 0 \n", "2 1 ... 0 0 0 1 0 1 0 0 1 \n", "3 0 ... 0 1 0 0 0 0 0 0 0 \n", "4 0 ... 1 0 0 1 1 0 0 0 0 \n", "\n", " young \n", "0 1 \n", "1 0 \n", "2 0 \n", "3 0 \n", "4 0 \n", "\n", "[5 rows x 68 columns]" ] }, "execution_count": 53, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Import the package\n", "from sklearn.feature_extraction.text import CountVectorizer\n", "\n", "def create_dtm(series):\n", " # Create an instance of the class\n", " cv = CountVectorizer()\n", " \n", " # Create a document term matrix from the provided series\n", " dtm = cv.fit_transform(series)\n", " \n", " # Convert the sparse array to a dense array\n", " dtm = dtm.todense()\n", " \n", " # Get column names\n", " features = cv.get_feature_names_out()\n", " \n", " # Create a dataframe\n", " dtm_df = pd.DataFrame(dtm, columns = features)\n", " \n", " # Return the dataframe\n", " return dtm_df\n", "\n", "# Try the function on the top 5 rows of the df['text']\n", "create_dtm(df.text.head())" ] }, { "cell_type": "markdown", "id": "ce1f7e22-b726-4159-8487-be65fa688f2f", "metadata": {}, "source": [ "## Feature Importance\n", "\n", "Now we want to think about sentiment analysis as a machine learning model. In such a machine learning model, we would like the model to take in the textual input and make predictions about the sentiment of each textual entry. In other words, the textual input is the independent variable and the sentiment is the dependent variable. 
We also learned that we can break the text down into smaller pieces called tokens. Therefore, we can think of each token within the textual input as a \"feature\" that helps predict the sentiment, which is the output of the machine learning model. To summarize, we started with a machine learning model that took in large textual data and predicted sentiments, but now we have converted our task into a model that takes in multiple \"tokens\" (instead of a large body of text) and predicts the sentiment based on those tokens. The next logical step is to quantify which of the tokens (i.e. features) are more important in predicting the sentiment. This task is called feature importance. \n", "\n", "Luckily for us, feature importance can be easily implemented in `sklearn`. Let's look at an example together. \n", "\n", "**Question 6:**\n", "\n", "Define a function named \"top_n_tokens\" that accepts three arguments: (1) \"text\", which is the textual input in the format of a data frame column, (2) \"sentiment\", which is the label of the sentiment for the given text in the format of a data frame column, and (3) \"n\", which is a positive integer. The function will return the top \"n\" most important tokens (i.e. features) for predicting the \"sentiment\" of the \"text\". Please use `LogisticRegression` from `sklearn.linear_model` with the following parameters: `solver = 'lbfgs'`, `max_iter = 2500`, and `random_state = 1234`. Finally, use the function to return the top 10 most important tokens in the \"text\" column of the dataframe.\n", "\n", "***Note:** Since the goal of this post is to explore sentiment analysis, we assume the reader is familiar with Logistic Regression. 
If you would like to take a deeper look at Logistic Regression, check out [this post](https://medium.com/@fmnobar/logistic-regression-overview-through-11-practice-questions-practice-notebook-64e94cb8d09d).*\n", "\n", "**Answer:**" ] }, { "cell_type": "code", "execution_count": 54, "id": "d6448d0a-de63-4e45-ac1e-13b9d2cbd71c", "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
TokensCoefficients
1567liked1.286747
2997wonderful1.242158
1104funny1.112821
1182great1.068772
2949well1.043139
246beautiful1.042833
0101.035405
344brilliant1.014080
908excellent1.009914
2203right0.985806
\n", "
" ], "text/plain": [ " Tokens Coefficients\n", "1567 liked 1.286747\n", "2997 wonderful 1.242158\n", "1104 funny 1.112821\n", "1182 great 1.068772\n", "2949 well 1.043139\n", "246 beautiful 1.042833\n", "0 10 1.035405\n", "344 brilliant 1.014080\n", "908 excellent 1.009914\n", "2203 right 0.985806" ] }, "execution_count": 54, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Import logistic regression\n", "from sklearn.linear_model import LogisticRegression\n", "\n", "def top_n_tokens(text, sentiment, n):\n", " # Create an instance of the class\n", " lgr = LogisticRegression(solver = 'lbfgs', max_iter = 2500, random_state = 1234)\n", " cv = CountVectorizer()\n", " \n", " # create the DTM\n", " dtm = cv.fit_transform(text)\n", " \n", " # Fit the logistic regression model\n", " lgr.fit(dtm, sentiment)\n", " \n", " # Get the coefficients\n", " coefs = lgr.coef_[0]\n", " \n", " # Create the features / column names\n", " features = cv.get_feature_names_out()\n", " \n", " # create the dataframe\n", " df = pd.DataFrame({'Tokens' : features, 'Coefficients' : coefs})\n", " \n", " # Return the largest n\n", " return df.nlargest(n, 'Coefficients')\n", "\n", "# Test it on the df['text']\n", "top_n_tokens(df.text, df.label, 10)" ] }, { "cell_type": "markdown", "id": "37184aba-f899-4fc6-9061-fb66f27dcfed", "metadata": {}, "source": [ "Results are quite interesting. We were looking for the most important features and as we know label 1 indicated a positive sentiment in the dataset. In other words, the most important features (i.e. the ones with the highest coefficients) will be the ones that indicate a strong positive sentiment. This comes across in the results, which all sound quite positive.\n", "\n", "In order to validate this hypothesis, let's look at the 10 smallest coefficients (i.e. the least important features). We expect those to convey a strong negative sentiment." 
] }, { "cell_type": "code", "execution_count": 55, "id": "d7f97825-06ad-49d1-8c3d-0471d3b6fd78", "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
TokensCoefficients
222bad-1.872751
211awful-1.334554
2530stupid-1.175416
441cheap-1.139512
1802no-1.137234
893even-1.091436
3017would-1.047931
3012worst-1.039231
2923waste-1.038206
1819nothing-0.973472
\n", "
" ], "text/plain": [ " Tokens Coefficients\n", "222 bad -1.872751\n", "211 awful -1.334554\n", "2530 stupid -1.175416\n", "441 cheap -1.139512\n", "1802 no -1.137234\n", "893 even -1.091436\n", "3017 would -1.047931\n", "3012 worst -1.039231\n", "2923 waste -1.038206\n", "1819 nothing -0.973472" ] }, "execution_count": 55, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Import logistic regression\n", "from sklearn.linear_model import LogisticRegression\n", "\n", "def bottom_n_tokens(text, sentiment, n):\n", " # Create an instance of the class\n", " lgr = LogisticRegression(solver = 'lbfgs', max_iter = 2500, random_state = 1234)\n", " cv = CountVectorizer()\n", " \n", " # create the DTM\n", " dtm = cv.fit_transform(text)\n", " \n", " # Fit the logistic regression model\n", " lgr.fit(dtm, sentiment)\n", " \n", " # Get the coefficients\n", " coefs = lgr.coef_[0]\n", " \n", " # Create the features / column names\n", " features = cv.get_feature_names_out()\n", " \n", " # create the dataframe\n", " df = pd.DataFrame({'Tokens' : features, 'Coefficients' : coefs})\n", " \n", " # Return the largest n\n", " return df.nsmallest(n, 'Coefficients')\n", "\n", "# Test it on the df['text']\n", "bottom_n_tokens(df.text, df.label, 10)" ] }, { "cell_type": "markdown", "id": "c706feaa-404b-434d-af7a-05e422071d63", "metadata": {}, "source": [ "As expected, these words convey a strong negative sentiment. \n", "\n", "In the previous example, we trained a logistic regression model on the existing labeled data. But what if we do not have labeled data and would like to determine the sentiment of a given data set? In such cases, we can leverage pre-trained models, such as TextBlob, which we will discuss next. \n", "\n", "## Pre-Trained Models - TextBlob\n", "\n", "TextBlob is a library for processing textual data and one of its functions returns the sentiment of a given data in the format of a named tuple as follows: \"(polarity, subjectivity)\". 
The polarity score is a float within the range of [-1.0, 1.0] that aims at differentiating whether the text is positive or negative. The subjectivity is a float within the range [0.0, 1.0] where 0.0 is very objective and 1.0 is very subjective. For example, a fact is expected to be objective and one's opinion is expected to be subjective. Polarity and subjectivity detection are two of the most common tasks within sentiment analysis, which we will explore in the next question.\n", "\n", "**Question 7:**\n", "\n", "Define a function named \"polarity_subjectivity\" that accepts two arguments. The function applies \"TextBlob\" to the provided \"text\" (defaulting to \"sample\") and if `print_results = True`, prints polarity and subjectivity of the \"text\" using \"TextBlob\", otherwise returns a tuple of float values with the first value being polarity and the second value being subjectivity, such as \"(polarity, subjectivity)\". Returning the tuple should be the default for the function (i.e. set `print_results = False`). Lastly, use the function on our sample and print the results. 
\n", "\n", "***Hint:** If you need to install TextBlob you can do so using the following command: `!pip install textblob`*\n", "\n", "**Answer:**" ] }, { "cell_type": "code", "execution_count": 56, "id": "cb692961-ff9c-4133-bec1-5958fb4c8bf7", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Polarity is 0.18 and subjectivity is 0.4.\n" ] } ], "source": [ "# Import TextBlob\n", "from textblob import TextBlob\n", "\n", "def polarity_subjectivity(text = sample, print_results = False):\n", " # Create an instance of TextBlob\n", " tb = TextBlob(text)\n", " \n", " # If the condition is met, print the results, otherwise, return the tuple\n", " if print_results:\n", " print(f\"Polarity is {round(tb.sentiment[0], 2)} and subjectivity is {round(tb.sentiment[1], 2)}.\")\n", " else:\n", " return(tb.sentiment[0], tb.sentiment[1])\n", " \n", "# Test the function on our sample\n", "polarity_subjectivity(sample, print_results = True)" ] }, { "cell_type": "markdown", "id": "b66d5751-d1ae-4fa7-a40c-5cc8c7654e17", "metadata": {}, "source": [ "Let's look at the sample and try to interpret these values. " ] }, { "cell_type": "code", "execution_count": 57, "id": "0d0ac7c0-425b-40e8-86a8-11f2f33c5470", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'A very, very, very slow-moving, aimless movie about a distressed, drifting young man. '" ] }, "execution_count": 57, "metadata": {}, "output_type": "execute_result" } ], "source": [ "sample" ] }, { "cell_type": "markdown", "id": "7d2e3f06-47af-4ec6-a307-b7ee192ea135", "metadata": {}, "source": [ "Interpreting these results are more meaningful in comparison to other strings but in the absence of such a comparison and purely based on the numbers, let's try to intrepret the reuslts. 
The results indicate that our sample has a neutral to positive polarity (remember polarity ranges from -1 to 1, therefore 0.18 would indicate neutral to positive) and is relatively subjective, which makes intuitive sense since this is someone's review describing their subjective experience about a movie. \n", "\n", "**Question 8:**\n", "\n", "First define a function named \"token_count\" that accepts a string and, using `nltk`'s word tokenizer, returns the integer number of tokens in the given string. Then define a second function named \"series_tokens\" that accepts a Pandas Series object as an argument and applies the previously-defined \"token_count\" function to the given Series, returning the integer number of tokens for each row of the given Series. Lastly, use the second function on the top 10 rows of our dataframe and return the results. \n", "\n", "**Answer:**" ] }, { "cell_type": "code", "execution_count": 58, "id": "addc9348-a2d1-4872-8768-591faba0312c", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0 18\n", "1 21\n", "2 33\n", "3 9\n", "4 22\n", "5 27\n", "6 4\n", "7 17\n", "8 4\n", "9 11\n", "Name: text, dtype: int64" ] }, "execution_count": 58, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Import libraries\n", "from nltk import word_tokenize\n", "# import nltk; nltk.download('punkt') # Uncomment if the tokenizer data is not yet downloaded\n", "\n", "# Define the first function that counts the number of tokens in a given string\n", "def token_count(string):\n", " return len(word_tokenize(string))\n", "\n", "# Define the second function that applies the token_count function to a given Pandas Series\n", "def series_tokens(series):\n", " return series.apply(token_count)\n", "\n", "# Apply the function to the top 10 rows of the dataframe\n", "series_tokens(df.text.head(10))" ] }, { "cell_type": "markdown", "id": "60b05e22-ff26-461d-9096-f3d93a749a3b", "metadata": {}, "source": [ "**Question 9:**\n", "\n", "Define a function named `series_polarity_subjectivity` that applies the `polarity_subjectivity()` function defined in 
Question 7 to a Pandas Series (in the form of a dataframe column) and returns the results. Then use the function on the top 10 rows of our dataframe to see the output.\n", "\n", "**Answer:**" ] }, { "cell_type": "code", "execution_count": 59, "id": "4ec921d8-0706-4121-aa89-496fca33cf82", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0 (0.18, 0.395)\n", "1 (0.014583333333333337, 0.4201388888888889)\n", "2 (-0.12291666666666666, 0.5145833333333333)\n", "3 (-0.24375000000000002, 0.65)\n", "4 (1.0, 0.3)\n", "5 (-0.1, 0.5)\n", "6 (-0.2, 0.0)\n", "7 (0.7, 0.6000000000000001)\n", "8 (-0.2, 0.5)\n", "9 (0.7, 0.8)\n", "Name: text, dtype: object" ] }, "execution_count": 59, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Define the function\n", "def series_polarity_subjectivity(series):\n", " return series.apply(polarity_subjectivity)\n", "\n", "# Apply to the top 10 rows of the df['text']\n", "series_polarity_subjectivity(df['text'].head(10))" ] }, { "cell_type": "markdown", "id": "f49cc565-9646-433e-a34b-85ba622694b3", "metadata": {}, "source": [ "## Measure of Complexity - Lexical Diversity\n", "\n", "As the name suggests, Lexical Diversity is a measurement of how many different lexical words there are in a given text and is formally defined as the number of unique tokens over the total number of tokens. The idea is that the more diverse a text's tokens are, the more complex that text is expected to be. Let's look at an example. \n", "\n", "**Question 10:**\n", "\n", "Define a \"complexity\" function that accepts a string as an argument and returns the lexical complexity score defined as the number of unique tokens over the total number of tokens. Then apply the function to the top 10 rows of our dataframe. 
\n", "\n", "**Answer:**" ] }, { "cell_type": "code", "execution_count": 60, "id": "202d893c-02e5-440f-9836-c252170f510a", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0 0.722222\n", "1 0.952381\n", "2 0.848485\n", "3 1.000000\n", "4 1.000000\n", "5 0.814815\n", "6 1.000000\n", "7 0.941176\n", "8 1.000000\n", "9 0.909091\n", "Name: text, dtype: float64" ] }, "execution_count": 60, "metadata": {}, "output_type": "execute_result" } ], "source": [ "def complexity(string):\n", " # Create a list of all tokens\n", " total_tokens = word_tokenize(string)\n", " \n", " # Create a set of all tokens (which only keeps unique values)\n", " unique_tokens = set(word_tokenize(string))\n", " \n", " # Return the complexity measure\n", " if len(total_tokens) == 0:\n", " return 0\n", " else:\n", " return len(unique_tokens) / len(total_tokens)\n", "\n", "# Apply to the top 10 rows of the dataframe\n", "df.text.head(10).apply(complexity)" ] }, { "cell_type": "markdown", "id": "804e9928-ae05-47e1-8440-d5c005157443", "metadata": {}, "source": [ "## Stopwords and Non-Alphabeticals\n", "\n", "If you recall in Question 3 we conducted a Frequency Distribution and the resulting 10 most common tokens were as follows: \n", "```\n", "[(',', 4), ('very', 3), ('A', 1), ('slow-moving', 1), ('aimless', 1), ('movie', 1), ('about', 1), ('a', 1), ('distressed', 1), ('drifting', 1)]\n", "```\n", "\n", "Some of these are not very helpful and are considered less significant compared to other tokens. For example, how much information can be gained from knowing that periods are quite common in a given text? An attempt at filtering out such less significant words so that the focus can be directed towards more significant words is called removal of the stopwords. Note that there is no universal definition of what these stopwords are and this designation is purely subjective. 
\n", "\n", "Let's look at some examples of English stopwords, as defined by `nltk`:" ] }, { "cell_type": "code", "execution_count": 61, "id": "0782d2e0-7d46-4021-99c8-b1f6abe23951", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "['i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you', \"you're\", \"you've\", \"you'll\", \"you'd\", 'your', 'yours', 'yourself', 'yourselves', 'he', 'him', 'his']\n" ] } ], "source": [ "# Import library\n", "from nltk.corpus import stopwords\n", "\n", "# Select only English stopwords\n", "english_stop_words = stopwords.words('english')\n", "\n", "# Print the first 20\n", "print(english_stop_words[:20])" ] }, { "cell_type": "markdown", "id": "20a8604e-255f-4683-a482-53ba6ac2ea91", "metadata": {}, "source": [ "**Question 11:**\n", "\n", "Define a function named \"stopword_remover\" that accepts a string as argument, tokenizes the input string, removes the English stopwords (as defined by `nltk`), and returns the tokens without the stopwords. 
Then apply the function to the top 5 rows of our dataframe.\n", "\n", "**Answer:**" ] }, { "cell_type": "code", "execution_count": 62, "id": "dc55726c-d595-48e4-9943-20e1450c04ce", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0 [,, ,, slow-moving, ,, aimless, movie, distressed, ,, drifting, young, man, .]\n", "1 [sure, lost, -, flat, characters, audience, ,, nearly, half, walked, .]\n", "2 [Attempting, artiness, black, &, white, clever, camera, angles, ,, movie, disappointed, -, became, even, ridiculous, -, acting, poor, plot, lines, almost, non-existent, .]\n", "3 [little, music, anything, speak, .]\n", "4 [best, scene, movie, Gerardo, trying, find, song, keeps, running, head, .]\n", "Name: text, dtype: object" ] }, "execution_count": 62, "metadata": {}, "output_type": "execute_result" } ], "source": [ "def stopword_remover(string):\n", " # Tokenize the string\n", " tokens = word_tokenize(string)\n", " \n", " # Create a list of English stopwords\n", " english_stopwords = stopwords.words('english')\n", " \n", " # Return non-stopwords (comparing in lowercase, since the stopword list is lowercase)\n", " return [w for w in tokens if w.lower() not in english_stopwords]\n", "\n", "# Apply to the top 5 rows of our df['text']\n", "df.text.head(5).apply(stopword_remover)" ] }, { "cell_type": "markdown", "id": "9525bf14-d0d7-4fd8-b4ce-c0a67c501571", "metadata": {}, "source": [ "Another group of tokens that we can consider filtering out, similar to stopwords, is the non-alphabeticals. As the name suggests, examples of non-alphabeticals are: `! % & # * $` (note that space is also considered a non-alphabetical). To help identify what is considered alphabetical or not, we can use `isalpha()`, a built-in Python string method that checks whether all characters in a given string are alphabetic. 
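One subtlety worth noting before the examples: `isalpha()` works character by character, so it returns `False` for spaces, digits, and punctuation, but `True` for accented and other non-ASCII letters. A quick sketch:

```python
# isalpha() is True only if every character is a letter (and the string is non-empty)
print("café".isalpha())       # True  -- accented letters count as alphabetic
print("Chapter2".isalpha())   # False -- contains a digit
print("Tom Jerry".isalpha())  # False -- contains a space
print("".isalpha())           # False -- empty strings are never alphabetic
```

This means the filter keeps legitimate words in other alphabets while still dropping tokens like `&` or `!`.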
Let's look at a few examples to better understand this concept:" ] }, { "cell_type": "code", "execution_count": 63, "id": "8494228f-1ba8-4a9a-891d-50f023c151d1", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "String_1: True\n", "\n", "String_2: False\n", "\n", "String_3: False\n" ] } ], "source": [ "string_1 = \"TomAndJerryAreFun\"\n", "string_2 = \"Tom&JerryAreFun\"\n", "string_3 = \"TomAndJerryAreFun!\"\n", "\n", "print(f\"String_1: {string_1.isalpha()}\\n\")\n", "print(f\"String_2: {string_2.isalpha()}\\n\")\n", "print(f\"String_3: {string_3.isalpha()}\")" ] }, { "cell_type": "markdown", "id": "e8cbcc83-3658-4ba8-a037-5c9094e2d4b5", "metadata": {}, "source": [ "Let's look at each one to better understand what happened. The first one returned \"True\", indicating the string contains only alphabetical characters. The second one returned \"False\" because of the \"&\", and the third one also returned \"False\", driven by the \"!\".\n", "\n", "Now that we are familiar with how `isalpha()` works, let's use it in our example to further clean up our data.\n", "\n", "**Question 12:**\n", "\n", "Define a function named \"stopword_nonalpha_remover\" that accepts a string as an argument, removes both stopwords (using the `stopword_remover()` function that we defined in the previous question) and non-alphabeticals, and then returns the remainder. 
Apply this function to the top 5 rows of our dataframe and visually compare to the outcome of the previous question (which still included the non-alphabeticals).\n", "\n", "**Answer:**" ] }, { "cell_type": "code", "execution_count": 64, "id": "95b507b9-f9e3-40c7-8805-897c15d266ba", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0 [aimless, movie, distressed, drifting, young, man]\n", "1 [sure, lost, flat, characters, audience, nearly, half, walked]\n", "2 [Attempting, artiness, black, white, clever, camera, angles, movie, disappointed, became, even, ridiculous, acting, poor, plot, lines, almost]\n", "3 [little, music, anything, speak]\n", "4 [best, scene, movie, Gerardo, trying, find, song, keeps, running, head]\n", "Name: text, dtype: object" ] }, "execution_count": 64, "metadata": {}, "output_type": "execute_result" } ], "source": [ "def stopword_nonalpha_remover(string):\n", " # Keep only the tokens that survive stopword removal and are purely alphabetical\n", " return [x for x in stopword_remover(string) if x.isalpha()]\n", "\n", "# Apply to the top 5 rows of df['text']\n", "df.text.head().apply(stopword_nonalpha_remover)" ] }, { "cell_type": "markdown", "id": "53710503-12bb-44b9-b4d0-e99397a08dfa", "metadata": {}, "source": [ "As expected, the non-alphabeticals were removed in addition to the stopwords, leaving behind the tokens that are expected to carry a higher significance than the removed ones.\n", "\n", "In the next step, we will put together everything that we have learned so far to find out which reviews had the highest complexity score.\n", "\n", "**Question 13:**\n", "\n", "Define a function named \"complexity_cleaned\" that accepts a Series and removes the stopwords and non-alphabeticals (using the function defined in Question 12). Then create a column named \"complexity\" in our dataframe that uses the \"complexity_cleaned\" function to calculate the complexity. 
Finally, return the rows of the dataframe for the 10 largest complexity scores.\n", "\n", "**Answer:**" ] }, { "cell_type": "code", "execution_count": 65, "id": "0b8d1ec4-f0aa-4cb3-a2ee-21b3a627512a", "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
textlabelcomplexity
0A very, very, very slow-moving, aimless movie about a distressed, drifting young man.01.0
484Kris Kristoffersen is good in this movie and really makes a difference.11.0
476Tom Wilkinson broke my heart at the end... and everyone else's judging by the amount of fumbling for hankies and hands going up to faces among males and females alike.11.0
477Julian Fellowes has triumphed again.11.0
478He's a national treasure.11.0
479GO AND SEE IT!11.0
480This is an excellent film.11.0
481The aerial scenes were well-done.11.0
482It was also the right balance of war and love.11.0
483The film gives meaning to the phrase, \"Never in the history of human conflict has so much been owed by so many to so few.11.0
\n", "
" ], "text/plain": [ " text \\\n", "0 A very, very, very slow-moving, aimless movie about a distressed, drifting young man. \n", "484 Kris Kristoffersen is good in this movie and really makes a difference. \n", "476 Tom Wilkinson broke my heart at the end... and everyone else's judging by the amount of fumbling for hankies and hands going up to faces among males and females alike. \n", "477 Julian Fellowes has triumphed again. \n", "478 He's a national treasure. \n", "479 GO AND SEE IT! \n", "480 This is an excellent film. \n", "481 The aerial scenes were well-done. \n", "482 It was also the right balance of war and love. \n", "483 The film gives meaning to the phrase, \"Never in the history of human conflict has so much been owed by so many to so few. \n", "\n", " label complexity \n", "0 0 1.0 \n", "484 1 1.0 \n", "476 1 1.0 \n", "477 1 1.0 \n", "478 1 1.0 \n", "479 1 1.0 \n", "480 1 1.0 \n", "481 1 1.0 \n", "482 1 1.0 \n", "483 1 1.0 " ] }, "execution_count": 65, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Define the complexity_cleaned function\n", "def complexity_cleaned(series):\n", " return series.apply(lambda x: complexity(' '.join(stopword_nonalpha_remover(x))))\n", "\n", "# Add 'complexity' column to the dataframe\n", "df['complexity'] = complexity_cleaned(df.text)\n", "\n", "# Return top 10 highest complexity scores\n", "df.sort_values('complexity', ascending = False).head(10)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.7" } }, "nbformat": 4, "nbformat_minor": 5 }