A famous website where you can search for research papers. Very useful.
Just uncheck "include patents" and "include citations" to get only research papers.
Just create some projects on GitLab or GitHub, then put the URLs of the projects on your CV.
Use Stack Overflow when you face a problem. By searching for words relevant to your problem, you can find many useful questions and answers. If you don't find any clue there, you can even create a new question for your problem.
Answering questions on Stack Overflow is also good practice for programmers.
Actually, the "Dropout" function is used to add some randomness, to prevent the network from memorizing the entire training data. So this code without the Dropout function works too:
from keras.models import Sequential
from keras.layers import Dense

num_classes = 10  # the 10 digits, 0 through 9
model = Sequential()
model.add(Dense(512, activation='relu', input_shape=(784,)))
model.add(Dense(512, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
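For comparison, the variant with Dropout would look something like this (just a sketch; the 0.2 dropout rate follows the standard Keras MNIST example and is an assumption here, not something required):

from keras.models import Sequential
from keras.layers import Dense, Dropout

model = Sequential()
model.add(Dense(512, activation='relu', input_shape=(784,)))
model.add(Dropout(0.2))  # randomly zero 20% of the activations during training
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(10, activation='softmax'))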
So what we care about most is the "Dense" function. According to the Keras documentation, the arguments of the Dense function are as follows:
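(This is the signature as given in the Keras 2 documentation; the exact defaults may differ slightly between versions.)

keras.layers.Dense(units, activation=None, use_bias=True,
                   kernel_initializer='glorot_uniform',
                   bias_initializer='zeros',
                   kernel_regularizer=None, bias_regularizer=None,
                   activity_regularizer=None,
                   kernel_constraint=None, bias_constraint=None)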
The first argument of the Dense function, 512, is the "units" of the layer. You can see this Stack Overflow question to learn what "units" means. It is just the "output shape" of the layer: the first layer is expected to output 512 values because 512 is given as its units.
So if we give 512 as the units of a layer, the result of the layer is carried over to the next layer through 512 neurons.
But why 512? Actually, we don't know exactly why 512 is used. Maybe 512 neurons handle the activations most efficiently here? It is a somewhat arbitrary number, which we must find by trial and error.
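If you want to experiment, a toy sketch of such trial and error could look like this (128, 256 and 512 are arbitrary candidates I picked for illustration, not recommendations):

from keras.models import Sequential
from keras.layers import Dense

# Build the same architecture with different unit counts and compare them.
for units in (128, 256, 512):
    m = Sequential()
    m.add(Dense(units, activation='relu', input_shape=(784,)))
    m.add(Dense(units, activation='relu'))
    m.add(Dense(10, activation='softmax'))
    m.summary()  # compare sizes; compile and fit each one to compare accuracy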
If you look carefully, you will notice that only the last layer has "num_classes" (= 10) as its units. The last layer has 10 as its units (or output shape) because the neural network is expected to output one of 10 digits (namely 0, 1, 2, 3, 4, 5, 6, 7, 8, 9) at the end. So the last layer must have 10 as its output shape.
"input_shape" of the dense function
Only the first layer has the argument "input_shape". Why? That's because every layer after the first can infer its input shape from the previous layer's output shape. All we must do is tell the first layer what shape will be given to it.
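You can confirm this shape inference with model.summary(). A small sketch, reusing the same model as above:

from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(512, activation='relu', input_shape=(784,)))  # input shape given explicitly
model.add(Dense(512, activation='relu'))    # input shape (512,) inferred from the layer above
model.add(Dense(10, activation='softmax'))  # input shape (512,) inferred as well
model.summary()  # the "Output Shape" column shows (None, 512), (None, 512), (None, 10)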
The MNIST samples are images of handwritten digits. Each image has 28 * 28 (= 784) grayscale pixels, like this:
28 * 28 pixels
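For reference, here is a minimal sketch of how those images are flattened into vectors of 784 values so that they match input_shape=(784,) (assuming the standard keras.datasets.mnist loader):

from keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
print(x_train.shape)  # (60000, 28, 28): 60000 images of 28 * 28 pixels

# Flatten each 28 * 28 image into one 784-element vector and scale the pixels to [0, 1].
x_train = x_train.reshape(60000, 784).astype('float32') / 255
x_test = x_test.reshape(10000, 784).astype('float32') / 255
print(x_train.shape)  # (60000, 784): ready for the first Dense layer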
"Relu" activation
What is an activation function in the first place? According to this page:
It’s just a thing function that you use to get the output of node. It is used to determine the output of neural network like yes or no. It maps the resulting values in between 0 to 1 or -1 to 1 etc. (depending upon the function). - SAGAR SHARMA, Towards Data Science
In the model used for MNIST, the relu and softmax activations are used.
Relu stands for "Rectified Linear Unit". It is the most widely used activation function, as it usually gives better results than other activation functions. Relu's advantage is "sparsity and a reduced likelihood of vanishing gradient", according to StackExchange, which helps the model learn effectively during training.
Softmax transforms arbitrary real values into probabilities, so it is used to turn the output of the previous layer into probabilities. In fact, this final layer is the one that makes the prediction.
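As a quick illustration, here is what the two functions do, written as a minimal NumPy sketch (not the actual Keras implementation):

import numpy as np

def relu(x):
    # Relu: negative values become 0, positive values pass through unchanged.
    return np.maximum(0, x)

def softmax(x):
    # Softmax: turn arbitrary real values into probabilities that sum to 1.
    e = np.exp(x - np.max(x))  # subtract the max for numerical stability
    return e / e.sum()

print(relu(np.array([-2.0, 0.0, 3.0])))    # [0. 0. 3.]
print(softmax(np.array([1.0, 2.0, 3.0])))  # [0.09  0.245 0.665] (rounded), sums to 1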
See here for other available activation functions.
Conclusion
As we have seen above, we can create the Keras model like this:
model = Sequential()
# 28 * 28 pixels = 784 pixels
# 512 for the "output shape"
model.add(Dense(512, activation='relu', input_shape=(784,)))
# 512 for the "output shape"
model.add(Dense(512, activation='relu'))
# One of the 10 digits (0, 1, 2, ..., 9) must be chosen at the last layer
model.add(Dense(10, activation='softmax'))
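To actually train this model, you would compile and fit it. Here is a minimal sketch, reusing the data loaded in the earlier MNIST snippet; the loss and optimizer follow the standard Keras MNIST example and are assumptions, not something dictated by the model:

from keras.utils import to_categorical

# One-hot encode the labels (0-9) so they match the 10-unit softmax output.
y_train_cat = to_categorical(y_train, 10)
y_test_cat = to_categorical(y_test, 10)

model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])
model.fit(x_train, y_train_cat, batch_size=128, epochs=5,
          validation_data=(x_test, y_test_cat))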
But we can make the model like this too:
model = Sequential()
model.add(Dense(300, activation='relu', input_shape=(784,)))
# Three more hidden layers with 300 neurons each!
# Why 300? I don't know why, but it might work!
model.add(Dense(300, activation='relu'))
model.add(Dense(300, activation='relu'))
model.add(Dense(300, activation='relu'))
model.add(Dense(10, activation='softmax'))
And, though it is meaningless, you can even give each layer just 1 neuron if you want:
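For example, a sketch like this would still compile, even though it would predict poorly:

model = Sequential()
model.add(Dense(1, activation='relu', input_shape=(784,)))  # a single neuron per hidden layer
model.add(Dense(1, activation='relu'))
model.add(Dense(10, activation='softmax'))  # the output shape must still be 10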
But the first layer's input shape and the last layer's output shape cannot be changed in any case. They must always stay consistent with the data: 784 values in, 10 classes out.
You can also change "relu" to other functions like "selu", but "softmax" cannot be exchanged for another function here, as it is the function used to get the probabilities. According to StackExchange, "the sigmoid function is used for the two-class logistic regression, whereas the softmax function is used for the multiclass logistic regression". In the example above there are 10 classes (0, 1, 2, ..., 9), so "sigmoid" cannot be used for it. You must use "softmax" for the example above.
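For comparison, a hypothetical two-class version (say, classifying "0" vs "not 0") could end with a single sigmoid unit. This is just an illustrative sketch, not part of the MNIST example above:

from keras.models import Sequential
from keras.layers import Dense

binary_model = Sequential()
binary_model.add(Dense(512, activation='relu', input_shape=(784,)))
binary_model.add(Dense(1, activation='sigmoid'))  # one probability for the two-class case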
In this post, we will see how to debug an Electron-vue app in vscode. First, write the following in launch.json in the .vscode folder.
{
  // Use IntelliSense to learn about possible attributes.
  // Hover to view descriptions of existing attributes.
  // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "attach",
      "name": "Attach to electron main",
      "port": 5858,
      "timeout": 30000,
      "sourceMaps": true,
      "outFiles": [
        "${workspaceRoot}/src/main/index.js"
      ]
    }
  ]
}
Then add a debugger statement where you want to set a breakpoint.
Please note that you can debug only the main process, not the renderer process.
Debug
After adding the debugger statement to the code, start debug mode. Then start the app with $ npm run dev. You will see the process stop in the compiled js file.
$ sbt
[info] Loading project definition from /home/user/project
[info] Set current project to shu (in build file:/home/user/)
[info] sbt server started at local:///home/user/.sbt/1.0/server/9a48bc25b5f71ce94d5c/sock
sbt:user>
To check the version:
$ sbt "show sbtVersion"
[info] Loading project definition from /home/user/project
[info] Set current project to shu (in build file:/home/shu/)
[info] 1.2.8
Install Scala support on VSCode
Install "Scala Language Server" on your vscode:
Create a new project
And run these commands in the vscode terminal:
$ cd {path of directory in which you want to save the project}
$ sbt new sbt/scala-seed.g8
You will be asked what name to give the project. I named it "scala-test". A new Scala project is then generated. Grab the project folder and drag and drop it onto vscode.
If it starts compiling, wait until the compilation finishes.
Hello World
Then run this command on the vscode terminal to see if you can run the project:
$ cd ./scala-test  # (or the project name you just gave)
$ sbt run
If you see output like the following, you have succeeded in compiling and running the project. You are ready to code in Scala.
$ sbt run
[info] Loading project definition from /home/shu/user/scala-test/project
[info] Updating ProjectRef(uri("file:/home/shu/user/scala-test/project/"), "scala-test-build")...
[info] Done updating.
[info] Compiling 1 Scala source to /home/shu/user/scala-test/project/target/scala-2.12/sbt-1.0/classes ...
[info] Done compiling.
[info] Loading settings for project root from build.sbt ...
[info] Set current project to scala test (in build file:/home/shu/user/scala-test/)
[info] Updating ...
[info] Done updating.
[info] Compiling 1 Scala source to /home/shu/user/scala-test/target/scala-2.12/classes ...
[info] Done compiling.
[info] Packaging /home/shu/user/scala-test/target/scala-2.12/scala-test_2.12-0.1.0-SNAPSHOT.jar ...
[info] Done packaging.
[info] Running example.Hello
hello
[success] Total time: 2 s, completed Jun 7, 2019 11:46:32 PM