<a href="https://www.bigdatauniversity.com"><img src="https://ibm.box.com/shared/static/qo20b88v1hbjztubt06609ovs85q8fau.png" width="400px" align="center"></a>

<h1 align="center"><font size="5">TENSORFLOW'S HELLO WORLD</font></h1>

<div class="alert alert-block alert-info" style="margin-top: 20px">
<font size="3"><strong>In this notebook we will go over the basics of TensorFlow, learn its structure, and see what motivates its use.</strong></font>
<br>
<h2>Table of Contents</h2>
<ol>
    <li><a href="#ref2">How does TensorFlow work?</a></li>
    <li><a href="#ref3">Building a Graph</a></li>
    <li><a href="#ref4">Defining multidimensional arrays using TensorFlow</a></li>
    <li><a href="#ref5">Why Tensors?</a></li>
    <li><a href="#ref6">Variables</a></li>
    <li><a href="#ref7">Placeholders</a></li>
    <li><a href="#ref8">Operations</a></li>
</ol>
</div>
<br>

<hr>

<a id="ref2"></a>
<h2>How does TensorFlow work?</h2>

TensorFlow defines computations as graphs, and these are made of operations (also known as "ops"). So, when we work with TensorFlow, it is the same as defining a series of operations in a graph.

To execute these operations as computations, we must launch the graph into a session. The session translates and passes the operations represented in the graph to the device you want to execute them on, be it a GPU or a CPU. In fact, TensorFlow's ability to execute code on different devices such as CPUs and GPUs is a consequence of this structure.

For example, the image below represents a graph in TensorFlow. <b>W</b>, <b>x</b> and <b>b</b> are tensors over the edges of this graph. <b>MatMul</b> is an operation over the tensors <b>W</b> and <b>x</b>; after that, <b>Add</b> is called and adds the result of the previous operation to <b>b</b>. The resultant tensor of each operation flows into the next one, until the end, where the desired result can be obtained.

<img src='https://ibm.box.com/shared/static/a94cgezzwbkrq02jzfjjljrcaozu5s2q.png'>

<h2>Importing TensorFlow</h2>
<p>To use TensorFlow, we need to import the library. We import it and give it the name "tf", so its modules can be accessed as <b>tf.module-name</b>:</p>

```python
import tensorflow as tf
```

-----------------

<a id="ref3"></a>
# Building a Graph

As we said before, TensorFlow works as a graph computational model.
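As a quick preview, here is a minimal sketch of the W·x + b graph from the figure above, assuming the same TensorFlow 1.x API used in this notebook; the graph name and the values of <b>W</b>, <b>x</b> and <b>b</b> are illustrative, not taken from the original notebook:

```python
# Minimal sketch of the graph from the figure: MatMul(W, x) followed by Add(..., b).
# The values are illustrative placeholders.
graph0 = tf.Graph()  # illustrative name
with graph0.as_default():
    W = tf.constant([[2.0, 1.0]], name='W')    # 1x2 matrix
    x = tf.constant([[3.0], [4.0]], name='x')  # 2x1 matrix
    b = tf.constant([[1.0]], name='b')         # 1x1 matrix
    y = tf.add(tf.matmul(W, x), b, name='y')   # y = W·x + b

with tf.Session(graph=graph0) as sess:
    print(sess.run(y))  # expected: [[11.]]
```

Below, we build this kind of graph step by step.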
Let's create our first graph, which we will name <b>graph1</b>:

```python
graph1 = tf.Graph()
```

Now we call the TensorFlow functions that construct new <b>tf.Operation</b> and <b>tf.Tensor</b> objects and add them to <b>graph1</b>. As mentioned, each <b>tf.Operation</b> is a <b>node</b> and each <b>tf.Tensor</b> is an edge in the graph.

Let's add two constants to our graph. For example, calling tf.constant([2], name = 'constant_a') adds a single <b>tf.Operation</b> to the default graph (here, <b>graph1</b>, which we set as the default). This operation produces the value 2 and returns a <b>tf.Tensor</b> that represents the value of the constant.
<b>Notice:</b> tf.constant([2], name="constant_a") creates a new tf.Operation named "constant_a" and returns a tf.Tensor named "constant_a:0".

```python
with graph1.as_default():
    a = tf.constant([2], name = 'constant_a')
    b = tf.constant([3], name = 'constant_b')
```

Let's look at the tensor __a__.

```python
a
# <tf.Tensor 'constant_a:0' shape=(1,) dtype=int32>
```

As you can see, this only shows the name, shape and type of the tensor in the graph. We will see its value when we run it in a TensorFlow session.

```python
# Printing the value of a
sess = tf.Session(graph = graph1)
result = sess.run(a)
print(result)  # [2]
sess.close()
```

After that, let's make an operation over these tensors. The function <b>tf.add()</b> adds two tensors (you could also use `c = a + b`).

```python
with graph1.as_default():
    c = tf.add(a, b)
    #c = a + b is also a way to define the sum of the terms
```
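If you want to check which nodes have been added so far, a tf.Graph exposes a get_operations() method; a small sketch (the expected names assume only the cells above have been run):

```python
# List the operations currently registered in graph1.
for op in graph1.get_operations():
    print(op.name)  # expected so far: constant_a, constant_b, Add
```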
To run our code, TensorFlow needs a session. Sessions are, in a way, a context for executing the graph inside TensorFlow. Let's define our session:

```python
sess = tf.Session(graph = graph1)
```

Let's run the session to get the result of the previously defined 'c' operation:

```python
result = sess.run(c)
print(result)  # [5]
```

Close the session to release resources:

```python
sess.close()
```

To avoid having to close sessions every time, we can define them in a <b>with</b> block, so after running the <b>with</b> block the session will close automatically:

```python
with tf.Session(graph = graph1) as sess:
    result = sess.run(c)
    print(result)  # [5]
```

Even this silly example of adding two constants to reach a simple result defines the basis of TensorFlow: define your operations (in this case our constants and _tf.add_), then start a session to run the graph.

<h3>What is the meaning of Tensor?</h3>

<div class="alert alert-success alertsuccess" style="margin-top: 20px">
<font size="3"><strong>In TensorFlow all data is passed between operations in a computation graph, and it is passed in the form of Tensors, hence the name TensorFlow.</strong></font>
<br>
<br>
The word <b>tensor</b>, from New Latin, means "that which stretches". It is a mathematical object named "tensor" because an early application of tensors was the study of materials stretching under tension. The contemporary meaning of tensors can be taken as multidimensional arrays.
</div>

That's great, but... what are these multidimensional arrays?

Going back a little bit to physics to understand the concept of dimensions:<br>
<img src="https://ibm.box.com/shared/static/ymn0hl3hf8s3xb4k15v22y5vmuodnue1.svg"/>
<div style="text-align:center"><a href="https://en.wikipedia.org/wiki/Dimension">Image Source</a></div>
<br>

The zero dimension can be seen as a point, a single object or a single item.

The first dimension can be seen as a line; a one-dimensional array can be seen as numbers along this line, or as points on the line. One dimension can contain infinitely many zero-dimensional elements (points).

The second dimension can be seen as a surface; a two-dimensional array can be seen as an infinite series of lines along an infinite line.

The third dimension can be seen as a volume; a three-dimensional array can be seen as an infinite series of surfaces along an infinite line.

The fourth dimension can be seen as hyperspace or spacetime, a volume varying through time, or an infinite series of volumes along an infinite line. And so on...
As mathematical objects: <br><br>
<img src="https://ibm.box.com/shared/static/kmxz570uai8eeg6i6ynqdz6kmlx1m422.png">
<div style="text-align: center"><a href="https://book.mql4.com/variables/arrays">Image Source</a></div>

Summarizing:<br><br>
<table style="width:100%">
  <tr>
    <td><b>Dimension</b></td>
    <td><b>Physical Representation</b></td>
    <td><b>Mathematical Object</b></td>
    <td><b>In Code</b></td>
  </tr>
  <tr>
    <td>Zero</td>
    <td>Point</td>
    <td>Scalar (Single Number)</td>
    <td>[ 1 ]</td>
  </tr>
  <tr>
    <td>One</td>
    <td>Line</td>
    <td>Vector (Series of Numbers)</td>
    <td>[ 1,2,3,4,... ]</td>
  </tr>
  <tr>
    <td>Two</td>
    <td>Surface</td>
    <td>Matrix (Table of Numbers)</td>
    <td>[ [1,2,3,4,...], [1,2,3,4,...], [1,2,3,4,...],... ]</td>
  </tr>
  <tr>
    <td>Three</td>
    <td>Volume</td>
    <td>Tensor (Cube of Numbers)</td>
    <td>[ [[1,2,...], [1,2,...], [1,2,...],...], [[1,2,...], [1,2,...], [1,2,...],...], [[1,2,...], [1,2,...], [1,2,...],...],... ]</td>
  </tr>
</table>

-----------------

<a id="ref4"></a>
<h2>Defining multidimensional arrays using TensorFlow</h2>

Now we will try to define such arrays using TensorFlow:

```python
graph2 = tf.Graph()
with graph2.as_default():
    Scalar = tf.constant(2)
    Vector = tf.constant([5,6,2])
    Matrix = tf.constant([[1,2,3],[2,3,4],[3,4,5]])
    Tensor = tf.constant( [ [[1,2,3],[2,3,4],[3,4,5]] , [[4,5,6],[5,6,7],[6,7,8]] , [[7,8,9],[8,9,10],[9,10,11]] ] )
with tf.Session(graph = graph2) as sess:
    result = sess.run(Scalar)
    print ("Scalar (1 entry):\n %s \n" % result)
    result = sess.run(Vector)
    print ("Vector (3 entries) :\n %s \n" % result)
    result = sess.run(Matrix)
    print ("Matrix (3x3 entries):\n %s \n" % result)
    result = sess.run(Tensor)
    print ("Tensor (3x3x3 entries) :\n %s \n" % result)

# Output:
# Scalar (1 entry): 2
# Vector (3 entries): [5 6 2]
# Matrix (3x3 entries): [[1 2 3] [2 3 4] [3 4 5]]
# Tensor (3x3x3 entries): [[[1 2 3] [2 3 4] [3 4 5]]
#                          [[4 5 6] [5 6 7] [6 7 8]]
#                          [[7 8 9] [8 9 10] [9 10 11]]]
```
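In addition, <b>tf.rank</b> and <b>tf.shape</b> are operations that return the rank and shape of a tensor at run time, inside a session; a minimal sketch reusing <b>graph2</b> from above (the variable names below are illustrative):

```python
with graph2.as_default():
    tensor_rank = tf.rank(Tensor)    # rank as a tensor
    tensor_shape = tf.shape(Tensor)  # shape as a tensor

with tf.Session(graph = graph2) as sess:
    print(sess.run(tensor_rank))   # expected: 3
    print(sess.run(tensor_shape))  # expected: [3 3 3]
```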
Outside a session, the <b>shape</b> attribute gives the static shape of our data structure:

```python
Scalar.shape
# TensorShape([])
```

```python
Tensor.shape
# TensorShape([Dimension(3), Dimension(3), Dimension(3)])
```

Now that you understand these data structures, I encourage you to play with them using some of the previous functions to see how they behave, according to their structure types:

```python
graph3 = tf.Graph()
with graph3.as_default():
    Matrix_one = tf.constant([[1,2,3],[2,3,4],[3,4,5]])
    Matrix_two = tf.constant([[2,2,2],[2,2,2],[2,2,2]])

    add_1_operation = tf.add(Matrix_one, Matrix_two)
    add_2_operation = Matrix_one + Matrix_two

with tf.Session(graph = graph3) as sess:
    result = sess.run(add_1_operation)
    print ("Defined using tensorflow function :")
    print(result)
    result = sess.run(add_2_operation)
    print ("Defined using normal expressions :")
    print(result)

# Both print:
# [[3 4 5]
#  [4 5 6]
#  [5 6 7]]
```

With the regular symbol and also with the TensorFlow function we were able to get an element-wise addition (the * operator and <b>tf.multiply</b> would similarly give the element-wise product, also known as the Hadamard product). <br>

But what if we want the regular matrix product?

We then need to use another TensorFlow function called <b>tf.matmul()</b>:

```python
graph4 = tf.Graph()
with graph4.as_default():
    Matrix_one = tf.constant([[2,3],[3,4]])
    Matrix_two = tf.constant([[2,3],[3,4]])

    mul_operation = tf.matmul(Matrix_one, Matrix_two)

with tf.Session(graph = graph4) as sess:
    result = sess.run(mul_operation)
    print ("Defined using tensorflow function :")
    print(result)

# [[13 18]
#  [18 25]]
```

We could also define this multiplication ourselves, but there is a function that already does it, so no need to reinvent the wheel!
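For completeness, here is a minimal sketch of that element-wise (Hadamard) product using <b>tf.multiply</b>; the graph name below is illustrative, not from the original notebook:

```python
graph_hadamard = tf.Graph()  # illustrative name
with graph_hadamard.as_default():
    Matrix_one = tf.constant([[1,2,3],[2,3,4],[3,4,5]])
    Matrix_two = tf.constant([[2,2,2],[2,2,2],[2,2,2]])
    hadamard = tf.multiply(Matrix_one, Matrix_two)  # same result as Matrix_one * Matrix_two

with tf.Session(graph = graph_hadamard) as sess:
    print(sess.run(hadamard))
    # expected:
    # [[ 2  4  6]
    #  [ 4  6  8]
    #  [ 6  8 10]]
```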
-----------------

<a id="ref5"></a>
<h2>Why Tensors?</h2>

The Tensor structure gives us the freedom to shape the dataset the way we want.

It is particularly helpful when dealing with images, due to the nature of how the information in images is encoded.

Thinking about images, it's easy to see that an image has a height and a width, so it would make sense to represent the information contained in it with a two-dimensional structure (a matrix)... until you remember that images have colors. To add the information about the colors, we need another dimension, and that's when Tensors become particularly helpful.

Images are encoded into color channels; the image data is represented by the intensity of each color channel at a given point, the most common encoding being RGB (Red, Green and Blue). The information contained in an image is the intensity of each channel across the width and height of the image, just like this:

<img src='https://ibm.box.com/shared/static/xlpv9h5xws248c09k1rlx7cer69y4grh.png'>
<a href="https://msdn.microsoft.com/en-us/library/windows/desktop/dn424131.aspx">Image Source</a>

So the intensity of the red channel at each point of the width and height can be represented in a matrix; the same goes for the green and blue channels. We end up with three matrices, and when these are combined they form a tensor.

-----------------

<a id="ref6"></a>
# Variables

Now that we are more familiar with the structure of data, we will take a look at how TensorFlow handles variables.

<b>First of all, having tensors, why do we need variables?</b>
TensorFlow variables are used to share and persist some state that is manipulated by our program. That is, when you define a variable, TensorFlow adds a <b>tf.Operation</b> to your graph. This operation stores a writable tensor value that persists between tf.Session.run calls. So, you can update the value of a variable through each run, while you cannot update a tensor (e.g. a tensor created by tf.constant()) across multiple runs in a session.

<b>How to define a variable?</b>
To define variables we use the command <b>tf.Variable()</b>.
To be able to use variables in a computation graph it is necessary to initialize them before running the graph in a session. This is done by running <b>tf.global_variables_initializer()</b>.

To update the value of a variable, we simply run an assign operation that assigns a value to the variable:

```python
v = tf.Variable(0)
```

Let's first create a simple counter, a variable that increases one unit at a time.

To do this we use the <b>tf.assign(reference_variable, value_to_update)</b> command. <b>tf.assign</b> takes two arguments: the <b>reference_variable</b> to update, and the <b>value_to_update</b> to assign to it.

```python
update = tf.assign(v, v+1)
```
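As a side note, the same increment can also be written with <b>tf.assign_add</b>, which adds a value to a variable in place; a minimal sketch (the variable name below is illustrative, not from the original notebook):

```python
w = tf.Variable(0)              # illustrative variable
update_w = tf.assign_add(w, 1)  # equivalent to tf.assign(w, w + 1)
```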
Variables must be initialized by running an initialization operation after the graph has been launched. We first have to add the initialization operation to the graph:

```python
init_op = tf.global_variables_initializer()
```

We then start a session to run the graph: first we initialize the variables, then print the initial value of <b>v</b>, and then run the operation that updates <b>v</b>, printing the result after each update:

```python
with tf.Session() as session:
    session.run(init_op)
    print(session.run(v))
    for _ in range(3):
        session.run(update)
        print(session.run(v))
```
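Note that this example uses TensorFlow's default graph rather than an explicitly created one. If you prefer the explicit style used earlier in this notebook, the same counter could be sketched as follows (the graph and variable names here are illustrative, not from the original notebook):

```python
graph6 = tf.Graph()  # illustrative name
with graph6.as_default():
    counter = tf.Variable(0)
    increment = tf.assign(counter, counter + 1)
    init_op6 = tf.global_variables_initializer()

with tf.Session(graph=graph6) as session:
    session.run(init_op6)
    print(session.run(counter))      # 0
    for _ in range(3):
        session.run(increment)
        print(session.run(counter))  # 1, 2, 3
```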
-----------------

<a id="ref7"></a>
# Placeholders

Now we know how to manipulate variables inside a TensorFlow graph, but what about feeding data from outside the graph?

If you want to feed data to a TensorFlow graph from outside the graph, you will need to use placeholders.

So <b>what are these placeholders and what do they do?</b>

Placeholders can be seen as "holes" in your model, "holes" through which you pass the data. You can create them using <b>tf.placeholder(_datatype_)</b>, where <b>_datatype_</b> specifies the type of data (integers, floating point numbers, strings, booleans) along with its precision (8, 16, 32 or 64 bits).

The data types and their respective Python syntax are:

|Data type|Python type|Description|
| --------- | --------- | --------- |
|DT_FLOAT|tf.float32|32-bit floating point.|
|DT_DOUBLE|tf.float64|64-bit floating point.|
|DT_INT8|tf.int8|8-bit signed integer.|
|DT_INT16|tf.int16|16-bit signed integer.|
|DT_INT32|tf.int32|32-bit signed integer.|
|DT_INT64|tf.int64|64-bit signed integer.|
|DT_UINT8|tf.uint8|8-bit unsigned integer.|
|DT_STRING|tf.string|Variable-length byte arrays. Each element of a tensor is a byte array.|
|DT_BOOL|tf.bool|Boolean.|
|DT_COMPLEX64|tf.complex64|Complex number made of two 32-bit floating points: real and imaginary parts.|
|DT_COMPLEX128|tf.complex128|Complex number made of two 64-bit floating points: real and imaginary parts.|
|DT_QINT8|tf.qint8|8-bit signed integer used in quantized ops.|
|DT_QINT32|tf.qint32|32-bit signed integer used in quantized ops.|
|DT_QUINT8|tf.quint8|8-bit unsigned integer used in quantized ops.|

So we create a placeholder:

```python
a = tf.placeholder(tf.float32)
```

And define a simple multiplication operation:

```python
b = a * 2
```

Now we need to define and run the session. But since we created a "hole" in the model through which to pass the data, when we run the session we are obligated to pass an argument with the data; otherwise we would get an error.

To pass the data into the model, we call the session with an extra argument, <b>feed_dict</b>, in which we pass a dictionary mapping each placeholder to its respective data, just like this:

```python
with tf.Session() as sess:
    result = sess.run(b, feed_dict={a: 3.5})
    print(result)
```

Since data in TensorFlow is passed in the form of multidimensional arrays, we can pass any kind of tensor through the placeholder to get the answer to the simple multiplication operation:

```python
dictionary = {a: [ [ [1,2,3],[4,5,6],[7,8,9],[10,11,12] ] , [ [13,14,15],[16,17,18],[19,20,21],[22,23,24] ] ] }

with tf.Session() as sess:
    result = sess.run(b, feed_dict=dictionary)
    print(result)
```
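Placeholders can also be created with an explicit <b>shape</b>, which lets TensorFlow check the data you feed; a minimal sketch (the placeholder name and the values below are illustrative, not from the original notebook):

```python
x_in = tf.placeholder(tf.float32, shape=(None, 3))  # illustrative; None leaves the batch size open
doubled = x_in * 2

with tf.Session() as sess:
    print(sess.run(doubled, feed_dict={x_in: [[1, 2, 3], [4, 5, 6]]}))
    # expected: [[ 2.  4.  6.]
    #            [ 8. 10. 12.]]
```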
\n", "<div class=\"alert alert-success alertsuccess\" style=\"margin-top: 20px\">Other operations can be easily found in: <a href=\"https://www.tensorflow.org/versions/r0.9/api_docs/python/index.html\">https://www.tensorflow.org/versions/r0.9/api_docs/python/index.html</a></div>" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [ "graph5 = tf.Graph()\n", "with graph5.as_default():\n", " a = tf.constant([5])\n", " b = tf.constant([2])\n", " c = tf.add(a,b)\n", " d = tf.subtract(a,b)\n", "\n", "with tf.Session(graph = graph5) as sess:\n", " result = sess.run(c)\n", " print ('c =: %s' % result)\n", " result = sess.run(d)\n", " print ('d =: %s' % result)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<b>tf.nn.sigmoid</b> is an activation function, it's a little more complicated, but this function helps learning models to evaluate what kind of information is good or not." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "-----------------" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Want to learn more?\n", "\n", "Running deep learning programs usually needs a high performance platform. __PowerAI__ speeds up deep learning and AI. Built on IBM’s Power Systems, __PowerAI__ is a scalable software platform that accelerates deep learning and AI with blazing performance for individual users or enterprises. The __PowerAI__ platform supports popular machine learning libraries and dependencies including TensorFlow, Caffe, Torch, and Theano. You can use [PowerAI on IMB Cloud](https://cocl.us/ML0120EN_PAI).\n", "\n", "Also, you can use __Watson Studio__ to run these notebooks faster with bigger datasets.__Watson Studio__ is IBM’s leading cloud solution for data scientists, built by data scientists. With Jupyter notebooks, RStudio, Apache Spark and popular libraries pre-packaged in the cloud, __Watson Studio__ enables data scientists to collaborate on their projects without having to install anything. Join the fast-growing community of __Watson Studio__ users today with a free account at [Watson Studio](https://cocl.us/ML0120EN_DSX).This is the end of this lesson. Thank you for reading this notebook, and good luck on your studies." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Thanks for completing this lesson!\n", "\n", "Notebook created by: <a href=\"https://linkedin.com/in/saeedaghabozorgi\"> Saeed Aghabozorgi </a> and <a href=\"https://ca.linkedin.com/in/rafaelblsilva\"> Rafael Belo Da Silva </a></h4> " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### References:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "https://www.tensorflow.org/versions/r0.9/get_started/index.html<br>\n", "http://jrmeyer.github.io/tutorial/2016/02/01/TensorFlow-Tutorial.html<br>\n", "https://www.tensorflow.org/versions/r0.9/api_docs/python/index.html<br>\n", "<a href=\"https://www.tensorflow.org/api_docs/python/\">https://www.tensorflow.org/versions/r0.9/resources/dims_types.html</a><br>\n", "https://en.wikipedia.org/wiki/Dimension<br>\n", "https://book.mql4.com/variables/arrays<br>\n", "https://msdn.microsoft.com/en-us/library/windows/desktop/dn424131(v=vs.85).aspx<br>" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "<hr>\n", "\n", "Copyright © 2018 [Cognitive Class](https://cocl.us/DX0108EN_CC). This notebook and its source code are released under the terms of the [MIT License](https://bigdatauniversity.com/mit-license/)." 