Sunday, September 29, 2019

Tech guide

Google scholar
A well-known search engine for research papers. Very useful.
Just uncheck "include patents" and "include citations" to get only research papers.

Google Tech Dev Guide 
It says it is for students, but actually it is for everyone who wants to learn and improve their skills and knowledge.

Free courses on open university website
You might find some interesting free lectures.

Free text books
You might find some interesting free textbooks. The books contain ads, but they are written by professionals.

Sci hub
https://en.wikipedia.org/wiki/Sci-Hub

To be a programmer/engineer


Just create some projects on GitLab or GitHub, then put the URLs of the projects on your CV.
Use Stack Overflow when you face a problem. By searching for words relevant to your problem, you can find many useful questions and answers. If you don't find any clue there, you can even create a new question for your problem.

Answering questions on Stack Overflow is also good practice for programmers.

Saturday, July 13, 2019

How to create a model for Keras

In this post, I will explain how to create a model for Keras. First, let's look at the MNIST example from the Keras GitHub repository.
The code is like this:
from __future__ import print_function

import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import RMSprop

batch_size = 128
num_classes = 10
epochs = 20

# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()

x_train = x_train.reshape(60000, 784)
x_test = x_test.reshape(10000, 784)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

model = Sequential()
model.add(Dense(512, activation='relu', input_shape=(784,)))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax'))

model.summary()

model.compile(loss='categorical_crossentropy',
              optimizer=RMSprop(),
              metrics=['accuracy'])

history = model.fit(x_train, y_train,
                    batch_size=batch_size,
                    epochs=epochs,
                    verbose=1,
                    validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
In the code, the model is created like this:
model = Sequential()
model.add(Dense(512, activation='relu', input_shape=(784,)))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax'))

Dense function

Actually, the "Dropout" layer is used to add some randomness to prevent the network from memorizing the entire training data. So this code without the dropout layers works too:
model = Sequential()
model.add(Dense(512, activation='relu', input_shape=(784,)))
model.add(Dense(512, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
So what we care about most is the "Dense" function. According to the Keras documentation, the arguments of the Dense function are:
keras.layers.Dense(units, activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None)
The first argument, 512, is the "units" of the layer. You can read this Stack Overflow question to learn what "units" means: it is simply the output shape of the layer. The first layer outputs a shape with 512 neurons because 512 is given as units.
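
Under the hood, a Dense layer is just a matrix multiplication plus a bias, followed by the activation. Here is a minimal NumPy sketch of what Dense(512, activation='relu') computes for one input (the weights here are random placeholders, not real trained weights):

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.random(784).astype("float32")                   # one flattened 28*28 image
W = rng.standard_normal((784, 512)).astype("float32")   # kernel of the layer
b = np.zeros(512, dtype="float32")                      # bias of the layer

# Dense(512, activation='relu') computes relu(x @ W + b)
out = np.maximum(0.0, x @ W + b)
print(out.shape)  # (512,) -- the "units" argument decides the output shape
```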


So if we give 512 as the units of the layer, the layer's output is passed on to the next layer through those 512 neurons.

But why 512? Actually, there is no special reason. Maybe 512 neurons handle the activations most efficiently, but it is essentially an arbitrary number that we must find by trial and error.


If you look carefully, you will notice that only the last layer has "num_classes" (= 10) as its units. The last layer has 10 as its units (or output shape) because the neural network is expected to output one of 10 digits (namely 0, 1, 2, 3, 4, 5, 6, 7, 8, 9) at the end. So the last layer must have 10 as its output shape.

"input_shape" of the dense function

Only the first layer has the argument "input_shape". Why? Because every layer after the first can infer its input shape from the previous layer's output shape. All we must do is specify what shape will be given to the first layer.

The samples of MNIST are images of handwritten digits. Each image has 28 * 28 (= 784) grayscale pixels like this:


28 * 28 pixels
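
The reshape in the code above is what turns each 28 * 28 image into one flat vector of 784 values. A quick sketch with dummy data in place of the real MNIST images:

```python
import numpy as np

# a dummy batch of 3 grayscale "images", 28 x 28 pixels each
images = np.zeros((3, 28, 28), dtype="uint8")

# flatten each image and scale the pixel values to [0, 1]
flat = images.reshape(3, 784).astype("float32") / 255
print(flat.shape)  # (3, 784) -- each row matches input_shape=(784,)
```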

"Relu" activation

What is activation in the first place? According to this page:
It’s just a thing function that you use to get the output of node. It is used to determine the output of neural network like yes or no. It maps the resulting values in between 0 to 1 or -1 to 1 etc. (depending upon the function).
- SAGAR SHARMA, Towards Data Science
In the model used for MNIST, relu and softmax activation are used.

model = Sequential()
model.add(Dense(512, activation='relu', input_shape=(784,)))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax'))

ReLU means "Rectified Linear Unit". It is the most widely used activation function, as it usually gives better results than other activation functions. ReLU's advantages are "sparsity and a reduced likelihood of vanishing gradient" according to StackExchange, which helps the model learn during training.
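
ReLU itself is a very simple function: it keeps positive values and replaces negative values with 0. A quick sketch using NumPy instead of Keras, just for illustration:

```python
import numpy as np

def relu(x):
    # negative values become 0, positive values pass through unchanged
    return np.maximum(0, x)

values = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(relu(values))  # the negatives become 0.0
```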

Softmax is used to transform arbitrary real values into probabilities, so it changes the output of the previous layer into probabilities. In effect, this is the layer that makes the prediction.
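
To see how softmax turns arbitrary scores into probabilities, here is a small NumPy sketch (the three scores are made-up numbers, not real model outputs):

```python
import numpy as np

def softmax(z):
    # subtract the max for numerical stability, then normalize
    e = np.exp(z - np.max(z))
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])  # hypothetical raw outputs of the last layer
probs = softmax(scores)
print(probs)        # three probabilities, largest for the largest score
print(probs.sum())  # ~= 1.0
```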

You can see here for other available activation functions.

Conclusion

As we have seen above, we can create a Keras model like this:

model = Sequential()
# 28 * 28 pixels = 784 pixels
# 512 for the "output shape"
model.add(Dense(512, activation='relu', input_shape=(784,)))
# 512 for the "output shape"
model.add(Dense(512, activation='relu'))
# One of the 10 digits (0, 1, 2, ..., 9) must be chosen at the last layer
model.add(Dense(10, activation='softmax'))

But we can make the model like this too:

model = Sequential()
model.add(Dense(300, activation='relu', input_shape=(784,)))
# Three hidden layers with 300 neurons!
# Why 300? I don't know why, but it might work!
model.add(Dense(300, activation='relu'))
model.add(Dense(300, activation='relu'))
model.add(Dense(300, activation='relu'))
model.add(Dense(10, activation='softmax'))

And, although this is pointless, you can give each layer a single neuron if you want:

model = Sequential()
model.add(Dense(1, activation='relu', input_shape=(784,)))
model.add(Dense(1, activation='relu'))
model.add(Dense(1, activation='relu'))
model.add(Dense(1, activation='relu'))
model.add(Dense(1, activation='relu'))  # 4 hidden layers with 1 neuron!
model.add(Dense(10, activation='softmax'))

But the first layer's input shape and the last layer's output shape cannot be changed in any case. They must always stay consistent with the data: 784 inputs and 10 outputs.

Also, you can change "relu" to other functions like "selu", but "softmax" cannot be replaced here, as it is the function used to obtain probabilities over multiple classes. According to StackExchange, "the sigmoid function is used for the two-class logistic regression, whereas the softmax function is used for the multiclass logistic regression". In the example above there are 10 classes (0, 1, 2, ..., 9), so "sigmoid" cannot be used. You must use "softmax" for this example.

Saturday, June 15, 2019

How to debug Electron-vue in vscode

In this post, we will see how to debug an Electron-vue app in vscode. First, write the following in launch.json in the .vscode folder.
{
   // Use IntelliSense to learn about possible attributes.
   // Hover to view descriptions of existing attributes.
   // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
   "version": "0.2.0",
   "configurations": [
       {
           "type": "node",
           "request": "attach",
           "name": "Attach to electron main",
           "port": 5858,
           "timeout": 30000,
           "sourceMaps": true,
           "outFiles": [
             "${workspaceRoot}/src/main/index.js"
           ]
       }
   ]
}
Then add a debugger statement where you want a breakpoint.
function test (testVar) {
  debugger
  return readFile(testVar)
}
Please note that you can debug only the main process, not the renderer process.

Debug

After adding the debugger statement in the code, start the debug mode. Then start the app with $ npm run dev. You will see the process stop at the statement in the compiled JS file.

Friday, June 7, 2019

Start Scala programming with VScode in Ubuntu

Scala, VScode, Ubuntu

In this post, I will explain how to start Scala with VScode on Ubuntu.
Versions are:
  • Scala: 2.11.12
  • VScode: 1.34
  • Ubuntu: 18.04.2 LTS

Install sbt

First, check the latest version of sbt here:
http://dl.bintray.com/sbt/debian/
At the time of writing it was 1.2.8, so install sbt 1.2.8 from there:
$ curl -L -o sbt.deb http://dl.bintray.com/sbt/debian/sbt-1.2.8.deb
$ sudo dpkg -i sbt.deb
$ sudo apt-get update
$ sudo apt-get install sbt
Now you can run sbt:
$ sbt
[info] Loading project definition from /home/user/project
[info] Set current project to shu (in build file:/home/user/)
[info] sbt server started at local:///home/user/.sbt/1.0/server/9a48bc25b5f71ce94d5c/sock
sbt:user> 
To check the version:
$ sbt "show sbtVersion"
[info] Loading project definition from /home/user/project
[info] Set current project to shu (in build file:/home/shu/)
[info] 1.2.8

Install Scala support on VScode

Install the "Scala Language Server" extension in your vscode:


Create a new project

And run these commands on the vscode terminal:
$ cd {path of directory in which you want to save the project}
$ sbt new sbt/scala-seed.g8
You will be asked what name to give the project. I named it "scala-test". A new Scala project is then generated. Drag and drop the project folder onto vscode.

If it starts compiling, wait until the compilation finishes.

Hello World

Then run this command on the vscode terminal to see if you can run the project:
$ cd ./scala-test  # or the project name that you just gave
$ sbt run

If you see output like this, you have successfully compiled and run the project. You are ready to code in Scala.
$ sbt run
[info] Loading project definition from /home/shu/user/scala-test/project
[info] Updating ProjectRef(uri("file:/home/shu/user/scala-test/project/"), "scala-test-build")...
[info] Done updating.
[info] Compiling 1 Scala source to /home/shu/user/scala-test/project/target/scala-2.12/sbt-1.0/classes ...
[info] Done compiling.
[info] Loading settings for project root from build.sbt ...
[info] Set current project to scala test (in build file:/home/shu/user/scala-test/)
[info] Updating ...
[info] Done updating.
[info] Compiling 1 Scala source to /home/shu/user/scala-test/target/scala-2.12/classes ...
[info] Done compiling.
[info] Packaging /home/shu/user/scala-test/target/scala-2.12/scala-test_2.12-0.1.0-SNAPSHOT.jar ...
[info] Done packaging.
[info] Running example.Hello 
hello
[success] Total time: 2 s, completed Jun 7, 2019 11:46:32 PM

Monday, April 1, 2019

Sine, Cosine, Tangent


Definition

  • Opposite is the side opposite the angle θ.
  • Adjacent is the side adjacent to the angle θ.
  • Hypotenuse is the side opposite the right angle.
  • The triangle must be a right triangle (also known as a "right-angled triangle").
Sine, Cosine, Tangent are defined as:

sin θ = Opposite / Hypotenuse
cos θ = Adjacent / Hypotenuse
tan θ = Opposite / Adjacent

Calculation

When a triangle with concrete side lengths is given, sine, cosine, and tangent are calculated by substituting those lengths into the definitions.

Ratio

When θ is 30°, the ratio of the side lengths is:
Opposite : Adjacent : Hypotenuse = 1 : √3 : 2
When θ is 45°, the ratio of the side lengths is:
Opposite : Adjacent : Hypotenuse = 1 : 1 : √2
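
We can sanity-check these ratios with Python's math module (this snippet is mine, not part of the original figures):

```python
import math

theta30 = math.radians(30)
sin30 = math.sin(theta30)  # Opposite / Hypotenuse = 1 / 2
cos30 = math.cos(theta30)  # Adjacent / Hypotenuse = √3 / 2
print(sin30, cos30)

theta45 = math.radians(45)
tan45 = math.tan(theta45)  # Opposite / Adjacent = 1 / 1
print(tan45)
```

The printed values are only approximately 0.5, 0.866..., and 1.0 because of floating-point rounding.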


Translation

Sine, Cosine, Tangent can be translated into each other:

tan θ = sin θ / cos θ

Or

sin θ = tan θ × cos θ

Or

cos θ = sin θ / tan θ

Or

sin²θ + cos²θ = 1

Law of sines

Law of sines (image from Wikipedia, "Law of sines"):

a / sin A = b / sin B = c / sin C = 2R
where a, b, and c are the lengths of the sides of a triangle, and A, B, and C are the opposite angles. R is the radius of the triangle's circumcircle. 

Law of cosines

Law of cosines (image from Wikipedia, "Law of cosines"):

c² = a² + b² − 2ab cos γ
where γ denotes the angle contained between sides of lengths a and b and opposite the side of length c.

For the same figure, the other two relations are analogous:

a² = b² + c² − 2bc cos α
b² = a² + c² − 2ac cos β
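
As a quick numeric check of the law of cosines, take a hypothetical 3-4-5 right triangle, where the angle γ opposite the side of length c is 90°:

```python
import math

a, b = 3.0, 4.0
gamma = math.radians(90)  # the angle between sides a and b

# law of cosines: c^2 = a^2 + b^2 - 2ab cos(gamma)
c = math.sqrt(a**2 + b**2 - 2 * a * b * math.cos(gamma))
print(c)  # approximately 5.0, since cos 90° = 0 reduces this to Pythagoras
```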

Law of tangents

Law of tangents (image from Wikipedia, "Law of tangents"):

(a − b) / (a + b) = tan((α − β) / 2) / tan((α + β) / 2)

where a, b, and c are the lengths of the three sides of the triangle, and α, β, and γ are the angles opposite those three respective sides.

References

Wikipedia, "Law of sines", accessed July 20, 2019
Wikipedia, "Law of cosines", accessed July 20, 2019
Wikipedia, "Law of tangents", accessed July 20, 2019
Math is Fun, "Sine, Cosine and Tangent", accessed July 20, 2019

Manipulations of int and String in Java

First, declare a variable as int:
int n = {your number};
To remove the rightmost digit:
n = n / 10;
To check whether the rightmost digit is 8:
n % 10 == 8;
If we declare a String variable:
String str = "Hello, test here.";
The last character of the String can be obtained like this:
String lastChar = str.substring(str.length() - 1);
To remove the last character of the String (str is already declared, so we just reassign it):
str = str.substring(0, str.length() - 1);
To get the number of characters in the String:
int numOfChars = str.length();

Weird behaviors

This returns 1:
public int returnOne() {
  return 1;
}
But this returns 0, because the post-increment operator count++ returns the value of count before incrementing:
public int returnOne() {
  int count = 0;
  return count++;
}
And this returns 1, because the pre-increment operator ++count increments first and then returns the new value:
public int returnOne() {
  int count = 0;
  return ++count;
}

Sunday, March 31, 2019

How to use Electron-vue

According to the github page, to install Electron-vue, run these commands:
$ npm install -g vue-cli
$ vue init simulatedgreg/electron-vue my-project
$ cd my-project
$ npm install (or yarn)
$ npm run dev (or yarn run dev)
You will be asked some questions during the installation, like this:
? Application Name (my-project)
? Project description (An electron-vue project)
? Select which Vue plugins to install (Press <space> to select, <a> to toggle all, <i> to inverse selection)
❯◉ axios
 ◉ vue-electron
 ◉ vue-router
 ◉ vuex
? Use linting with ESLint? (Y/n)
? Which eslint config would you like to use? (Use arrow keys)
❯ Standard (https://github.com/feross/standard)
  AirBNB (https://github.com/airbnb/javascript)
  none (configure it yourself)
? Setup unit testing with Karma + Mocha? (Y/n)
? Setup end-to-end testing with Spectron + Mocha? (Y/n)
? What build tool would you like to use? (Use arrow keys)
❯ electron-builder (https://github.com/electron-userland/electron-builder)
  electron-packager (https://github.com/electron-userland/electron-packager)
? author (test <test@example.com>)
Now you should have the electron-vue project locally. To start the project, change the directory,
$ cd my-project
and use the following commands depending on what you want to do. If you just want to run the project to see what it is like, run npm install (to install the dependencies) and then $ npm run dev. If errors emerge without sudo, try this: npm throws error without sudo.
# Install dependencies
npm install

# Serve with hot reload at localhost:9080
npm run dev

# Build electron application for production
npm run build

# Run unit & end-to-end tests
npm test

# Lint all JS/Vue component files in `src/`
npm run lint
After running $ npm run dev, you will see this:

And the installation is a success.

Hello World

It may be a good idea to run git init now if you want to use git to develop the electron-vue project. After doing so, we need to add components.
In electron-vue there is always only one page: my-project/src/renderer/App.vue is always rendered. my-project/src/renderer/router/index.js assigns components to <router-view></router-view> in App.vue, and we put the components in my-project/src/renderer/components/.
We create our Hello World like this:
<template>
  <div>
    <router-link to="/">Hello World</router-link>
  </div>
</template>

<script>
  export default {
    name: 'hello-world',
    methods: {
      open (link) {
        this.$electron.shell.openExternal(link)
      }
    }
  }
</script>
and save this as my-project/src/renderer/components/HelloWorld.vue.
Then open my-project/src/renderer/router/index.js and add the component in the router.
import Vue from 'vue'
import Router from 'vue-router'

Vue.use(Router)

export default new Router({
  routes: [
    {
      path: '/',
      name: 'landing-page',
      component: require('@/components/LandingPage').default
    },

    // Add the following-----------------
    {
      path: '/hello-world',
      name: 'hello-world',
      component: require('@/components/HelloWorld').default
    },
    // ----------------------------------

    {
      path: '*',
      redirect: '/'
    }
  ]
})
And add the link to the default landing page (path: my-project/src/renderer/components/LandingPage.vue):
<template>
  <div id="wrapper">
    <img id="logo" src="~@/assets/logo.png" alt="electron-vue">
    <main>
      <div class="left-side">
        <span class="title">
          Welcome to your new project!
        </span>
        <system-information></system-information>
        <!-- -------------Add the following------------- -->
        <router-link to="/hello-world">Hello World!</router-link>
        <!-- ------------------------------------------- -->
And you will see that the link has been added. Click it.

You will see the Hello World page: