Sunday, September 24, 2017

Prediction using trained model and parameters

Caution: This code does not work in a console-only (CUI) environment, which means you cannot run this program with VirtualBox + Vagrant (at least not without a GUI environment).
Disclaimer: This code is published only as a code example. We take no responsibility for any loss you may incur by using it.

This code needs stock price data. Create a folder named "csv" for the data in your working directory.

In the csv folder, put the following data files. They contain price data from the Japanese stock market.
Download: https://github.com/shunakanishi/japanese_stockprice

Go back to the top directory from the csv folder and create a Python script named "stockprice.py".

We will use a trained model and parameters stored in "model" -> "stockprice": the "model" folder contains a "stockprice" folder, and the "stockprice" folder contains the saved architecture (stockprice_model.json) and weights (stockprice_model_weights.hdf5).

If you don't have a trained model and parameters yet, go to the post "Deep learning: Prediction of stock price" (September 10, 2017) below and create them first.

Write the following inside "stockprice.py":
#-*- coding: utf-8 -*-
import numpy
import pandas
import matplotlib.pyplot as plt

from sklearn import preprocessing
from keras.models import Sequential
from keras.models import model_from_json
from keras.layers.core import Dense, Activation
from keras.layers.recurrent import LSTM
import keras.backend.tensorflow_backend as KTF
import os.path

class Prediction :

  def __init__(self):
    self.length_of_sequences = 10
    self.in_out_neurons = 1
    self.hidden_neurons = 300


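  # Build sliding windows: each X sample holds n_prev consecutive closing prices and Y holds the price that follows.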
  def load_data(self, data, n_prev=10):
    X, Y = [], []
    for i in range(len(data) - n_prev):
      X.append(data.iloc[i:(i+n_prev)].as_matrix())
      Y.append(data.iloc[i+n_prev].as_matrix())
    retX = numpy.array(X)
    retY = numpy.array(Y)
    return retX, retY


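  # Rebuild the model from the saved JSON architecture and load the trained weights; return None if nothing is saved.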
  def create_model(self, f_model, model_filename, weights_filename) :
    print(os.path.join(f_model,model_filename))
    if os.path.isfile(os.path.join(f_model,model_filename)):
      print('Saved parameters found. I will use this file...')
      json_string = open(os.path.join(f_model, model_filename)).read()
      model = model_from_json(json_string)
      model.summary()
      model.compile(loss="mape", optimizer="adam")
      model.load_weights(os.path.join(f_model,weights_filename))
    else:
      print('Saved parameters Not found. Please prepare model and parameters.')
      model = None
    return model

if __name__ == "__main__":

  f_log = './log'
  f_model = './model/stockprice'
  model_filename = 'stockprice_model.json'
  yaml_filename = 'stockprice_model.yaml'
  weights_filename = 'stockprice_model_weights.hdf5'

  prediction = Prediction()

  # Data
  data = None
  for year in range(2007, 2017):
    data_ = pandas.read_csv('csv/indices_I101_1d_' + str(year) +  '.csv')
    data = data_ if (data is None) else pandas.concat([data, data_])
  data.columns = ['date', 'open', 'high', 'low', 'close']
  data['date'] = pandas.to_datetime(data['date'], format='%Y-%m-%d')
  # Data of closing price
  data['close'] = preprocessing.scale(data['close'])
  data = data.sort_values(by='date')
  data = data.reset_index(drop=True)
  data = data.loc[:, ['date', 'close']]

  # 100% of the data is used as test data.
  # split_pos = int(len(data) * 0.8)
  # x_train, y_train = prediction.load_data(data[['close']].iloc[0:split_pos], prediction.length_of_sequences)
  x_test,  y_test  = prediction.load_data(data[['close']], prediction.length_of_sequences)

  old_session = KTF.get_session()

  model = prediction.create_model(f_model, model_filename, weights_filename)
  if model is None:
    raise SystemExit('Saved model not found. Create the trained model and parameters first.')

  predicted = model.predict(x_test)
  result = pandas.DataFrame(predicted)
  result.columns = ['predict']
  result['actual'] = y_test
  result.plot()
  plt.show()

Then run this command:
$ sudo python3 stockprice.py


100% of the data will be used for the test (this script does no training; it only predicts with the saved model).

jQuery show, hide and toggle with effect

A button toggles the visibility of three input fields (Name, Age and Sex) with a sliding effect.
The code:
<button onclick="toggle_display()">Show the input fields</button>
<br />
<br />
<br />
<div id="something">
  Name:<br />
  <input type="text" /><br />
  Age:<br />
  <input type="text" /><br />
  Sex:<br />
  <input type="text" /><br />
</div>

<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script>
<script>
   function toggle_display(){
       $("#something").toggle( "slow" );
       //or you can use codes like these:
       //$("#something").show( "slow" );
       //$("#something").hide( "slow" );
   }
</script>

Monday, September 18, 2017

Python debug

Debugging a Python script is not difficult. Just add this line:
import pdb; pdb.set_trace()
anywhere in the script.

For example:
import sys

def log(args):
    print(args)

def main(args):
    log(args)

import pdb;pdb.set_trace()
main(sys.argv)

The execution will stop at the line where pdb.set_trace() is called, and an interactive (Pdb) prompt appears.

Commands of the Python debugger:
s : Step in.
n : Next. Step over.
r : Return. Step out.
l : List. Shows the code around the current location.
a : Args. Shows the arguments of the current function.
p : Print. Use it this way: p some_variable
c : Continue.
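You can also run the whole script under the debugger without editing it (your_script.py below is just a placeholder for your own file):
$ python3 -m pdb your_script.py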



Sunday, September 17, 2017

How to become a researcher

1. Read research papers. You can search for research papers on Google Scholar.
(If you don't have enough knowledge to read research papers, first study the field to get the essential background.)

2. Write research papers. Generally speaking, a research paper is written by critiquing previous papers and introducing your own, better idea. So it is very important to read and understand the current research papers.

3. Get a master's degree from a university. A master's degree is considered the first step toward becoming a researcher.

4. Get a PhD from a university. A PhD is proof of your research ability. The more famous the university, the better. If you can produce good research papers, it should not be so difficult to get a PhD.

5. Keep publishing your research papers in peer-reviewed journals. The number of papers you have published in peer-reviewed journals is considered your achievement: the more papers you publish in peer-reviewed journals, the better a researcher you are.

Of course the quality of the papers is important too, but generally speaking, the number of research papers you have published in peer-reviewed journals is the simplest indicator of how good a researcher you are. (By the way, the number of times a paper is cited is considered an important indicator of its quality.)

6. After getting a PhD and publishing good papers in journals, you should be able to become a professor at a university, or a professional researcher at a big company like Google. Once you have such a job, you are a proper researcher.

The point is that reading research papers is very important, because it helps you when you write your own. The field you choose also matters: some fields, such as medicine, computer science or engineering, pay well, while fields like classical literature pay little and offer few chances of being employed by companies or schools.

Saturday, September 16, 2017

tab window

This is a tab window using jQuery, jQuery UI and jQuery.tablesorter.
The code:
<link rel="stylesheet" href="http://ajax.googleapis.com/ajax/libs/jqueryui/1.12.1/themes/base/jquery-ui.css"></link>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script>
<script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.12.1/jquery-ui.min.js"></script>
<link rel="stylesheet" type="text/css" href="https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.28.15/css/dragtable.mod.min.css">
<script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/jquery.tablesorter/2.28.15/js/jquery.tablesorter.min.js"></script>
<script>
    $( function() {
        $( "#tabs" ).tabs();
    });
</script>
    <div id="tabs">
  <ul>
<li><a href="#tabs-1">Tab 1</a></li>
<li><a href="#tabs-2">Tab 2</a></li>
        <li><a href="#tabs-3">Tab 3</a></li>
</ul>
     <div id="tabs-1">
            <span>tab1</span><br/>
            <span>tab1</span><br/>
            <span>tab1</span><br/>
            <span>tab1</span><br/>
            <span>tab1</span><br/>
     </div>
     <div id="tabs-2">
            <span>tab2</span><br/>
            <span>tab2</span><br/>
            <span>tab2</span><br/>
            <span>tab2</span><br/>
            <span>tab2</span><br/>
     </div>
        <div id="tabs-3">
            <span>tab3</span><br/>
            <span>tab3</span><br/>
            <span>tab3</span><br/>
            <span>tab3</span><br/>
            <span>tab3</span><br/>
     </div>
    </div>

If you want to expand the dropzone to whole body

If you want to expand the drop zone to the whole body, write the page like this:


<head>
    <link rel="stylesheet" type="text/css" href="./script/dropzone.min.css">
</head>
<form>
    <div id="dZUpload" class="dropzone">
        <div class="dz-default dz-message"></div>
    </div>
</form>
<!-- Use google CDN for jQuery -->
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script>
<script src="./script/dropzone.min.js"></script>
<script>
    Dropzone.autoDiscover = false;
    $(document).ready(function () {
        $("body").dropzone({
            url: "http://192.168.1.1/upload.php", //Your URL to upload
            addRemoveLinks: true,
            previewsContainer: '#dZUpload',
            clickable: '#dZUpload',
            //If you are using cakephp 3, use this token.
            /*headers: {
                'X-CSRFToken': $('meta[name="token"]').attr('content')
            },*/
            success: function (file, response) {
                var imgName = response;
                //For animation
                file.previewElement.classList.add("dz-success");
                //To add id to the previews
                //file.previewElement.id = response.id;
                console.log("Successfully uploaded :" + imgName);
            }
        });
    });
</script>

Drag and drop by jQuery UI and Dropzone.js

This example combines a jQuery UI dialog with Dropzone.js: dragging a file over the page opens a dialog that contains the drop zone.

<head>
    <link rel="stylesheet" type="text/css" href="./script/dropzone.min.css">
    <link rel="stylesheet" href="http://ajax.googleapis.com/ajax/libs/jqueryui/1.12.1/themes/base/jquery-ui.css"></link>
    <style type="text/css">
        .dropzone {border: none ! important;}
    </style>
</head>
<body ondragover="open_dialog()">
    <div id="dialog" title="Dialog" style="display:none;">
        <form>
            <div id="dZUpload" class="dropzone">
                <div class="dz-default dz-message"><span>Faites glisser et déposez ici.</span></div>
            </div>
        </form>
    </div>
</body>
<!-- Use google CDN for jQuery -->
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script>
<script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.12.1/jquery-ui.min.js"></script>
<script src="./script/dropzone.min.js"></script>
<script>
    Dropzone.autoDiscover = false;
    $(document).ready(function () {
        $("body").dropzone({
            url: "http://192.168.1.1/upload.php", //Your URL to upload
            addRemoveLinks: true,
            previewsContainer: '#dZUpload',
            clickable: '#dZUpload',
            //If you are using cakephp 3, you need this token.
            /*headers: {
                'X-CSRFToken': $('meta[name="token"]').attr('content')
            },*/
            success: function (file, response) {
                var imgName = response;
                //For animation
                file.previewElement.classList.add("dz-success");
                //To add id
                //file.previewElement.id = response.id;
                console.log("Successfully uploaded :" + imgName);
            }
        });
    });
    dialog = $("#dialog").dialog({
        autoOpen: false,
        //open:function(event,ui){$(".ui-dialog-titlebar-close").hide();},
        modal: true,
        draggable: false,
        closeOnEscape: true,
        width: "50%",
        show: "fade",
        hide: "fade",
        buttons:{
            "Do something":function(){
                $(this).dialog('close');
            },
            "Close":function(){
                $(this).dialog('close');
            }
        }
    });
    function open_dialog(){
        dialog.dialog('open');
    }
</script>

Sunday, September 10, 2017

Deep learning: Prediction of stock price

Caution: This code does not work in a console-only (CUI) environment, which means you cannot run this program with VirtualBox + Vagrant (at least not without a GUI environment).
Disclaimer: This code is published only as a code example. We take no responsibility for any loss you may incur by using it.

Introduction


This code needs stock price data. Create a folder named "csv" for the data in your working directory.

In the csv folder, put the following data files. They contain price data from the Japanese stock market.
Download: https://github.com/shunakanishi/japanese_stockprice

Go back to the top directory from the csv folder and create a Python script named "stockprice.py".

And the code is:
#-*- coding: utf-8 -*-
import numpy
import pandas
import matplotlib.pyplot as plt

from sklearn import preprocessing
from keras.models import Sequential
from keras.models import model_from_json
from keras.layers.core import Dense, Activation
from keras.layers.recurrent import LSTM
import keras.backend.tensorflow_backend as KTF
import os.path

class Prediction :

  def __init__(self):
    self.length_of_sequences = 10
    self.in_out_neurons = 1
    self.hidden_neurons = 300


  def load_data(self, data, n_prev=10):
    X, Y = [], []
    for i in range(len(data) - n_prev):
      X.append(data.iloc[i:(i+n_prev)].as_matrix())
      Y.append(data.iloc[i+n_prev].as_matrix())
    retX = numpy.array(X)
    retY = numpy.array(Y)
    return retX, retY


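  # If a saved model file exists, rebuild the network and load the saved weights; otherwise build and compile a fresh LSTM regressor.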
  def create_model(self, f_model, model_filename, weights_filename) :
    print(os.path.join(f_model,model_filename))
    if os.path.isfile(os.path.join(f_model,model_filename)):
      print('Saved parameters found. I will use this file...')
      model = Sequential()
      model.add(LSTM(self.hidden_neurons, \
              batch_input_shape=(None, self.length_of_sequences, self.in_out_neurons), \
              return_sequences=False))
      model.add(Dense(self.in_out_neurons))
      model.add(Activation("linear"))
      model.compile(loss="mape", optimizer="adam")
      model.load_weights(os.path.join(f_model,weights_filename))
    else:
      print('Saved parameters Not found. Creating new one...')
      model = Sequential()
      model.add(LSTM(self.hidden_neurons, \
              batch_input_shape=(None, self.length_of_sequences, self.in_out_neurons), \
              return_sequences=False))
      model.add(Dense(self.in_out_neurons))
      model.add(Activation("linear"))
      model.compile(loss="mape", optimizer="adam")
    return model


  def train(self, f_model, model_filename, weights_filename, X_train, y_train) :
    model = self.create_model(f_model, model_filename, weights_filename)
    # Learn
    model.fit(X_train, y_train, batch_size=10, epochs=15)
    return model


if __name__ == "__main__":

  f_log = './log'
  f_model = './model/stockprice'
  model_filename = 'stockprice_model.json'
  yaml_filename = 'stockprice_model.yaml'
  weights_filename = 'stockprice_model_weights.hdf5'

  prediction = Prediction()

  # Data
  data = None
  for year in range(2007, 2017):
    data_ = pandas.read_csv('csv/indices_I101_1d_' + str(year) +  '.csv')
    data = data_ if (data is None) else pandas.concat([data, data_])
  data.columns = ['date', 'open', 'high', 'low', 'close']
  data['date'] = pandas.to_datetime(data['date'], format='%Y-%m-%d')
  # Data of closing price
  data['close'] = preprocessing.scale(data['close'])
  data = data.sort_values(by='date')
  data = data.reset_index(drop=True)
  data = data.loc[:, ['date', 'close']]

  # 20% of the data is used as test data.
  split_pos = int(len(data) * 0.8)
  x_train, y_train = prediction.load_data(data[['close']].iloc[0:split_pos], prediction.length_of_sequences)
  x_test,  y_test  = prediction.load_data(data[['close']].iloc[split_pos:], prediction.length_of_sequences)

  old_session = KTF.get_session()

  model = prediction.train(f_model, model_filename, weights_filename, x_train, y_train)

  predicted = model.predict(x_test)
  json_string = model.to_json()
  open(os.path.join(f_model,model_filename), 'w').write(json_string)
  yaml_string = model.to_yaml()
  open(os.path.join(f_model,yaml_filename), 'w').write(yaml_string)
  print('save weights')
  model.save_weights(os.path.join(f_model,weights_filename))
  KTF.set_session(old_session)
  result = pandas.DataFrame(predicted)
  result.columns = ['predict']
  result['actual'] = y_test
  result.plot()
  plt.show()

To save the trained model and parameters, create a "model" folder and a "log" folder.

And in the "model" folder, create a "stockprice" folder.

And run the script:
$ sudo python3 stockprice.py


The result

The trained model and parameters are saved in the "model" -> "stockprice" folder.
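After a successful run, the files written by the script should look like this:

model/stockprice/
    stockprice_model.json
    stockprice_model.yaml
    stockprice_model_weights.hdf5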


Use data from Yahoo Finance


Now we will use data extracted from Yahoo Finance. Get some data from these links:

Data of Nikkei
https://finance.yahoo.com/quote/%5EN225/history?ltr=1

Data of NY Dow
https://finance.yahoo.com/quote/%5EDJI/history?ltr=1

Data of Nasdaq
https://finance.yahoo.com/quote/%5EIXIC/history?ltr=1

And save the data as "stock.csv" in the csv folder.

Now open the "stockprice.py" file and change the inside as follows:
#-*- coding: utf-8 -*-
import numpy
import pandas
import matplotlib.pyplot as plt

from sklearn import preprocessing
from keras.models import Sequential
from keras.models import model_from_json
from keras.layers.core import Dense, Activation, Dropout
from keras.layers.noise import AlphaDropout
from keras.layers.recurrent import LSTM
import keras.backend.tensorflow_backend as KTF
import os.path

class Prediction :

  def __init__(self):
    self.length_of_sequences = 10
    self.in_out_neurons = 1
    self.hidden_neurons = 300


  def load_data(self, data, n_prev=10):
    X, Y = [], []
    for i in range(len(data) - n_prev):
      X.append(data.iloc[i:(i+n_prev)].as_matrix())
      Y.append(data.iloc[i+n_prev].as_matrix())
    retX = numpy.array(X)
    retY = numpy.array(Y)
    return retX, retY


  def create_model(self, f_model, model_filename, weights_filename) :
    print(os.path.join(f_model,model_filename))
    if os.path.isfile(os.path.join(f_model,model_filename)):
      print('Saved parameters found. I will use this file...')
      json_string = open(os.path.join(f_model, model_filename)).read()
      model = model_from_json(json_string)
      model.compile(loss="mape", optimizer="adam")
      model.load_weights(os.path.join(f_model,weights_filename))
    else:
      print('Saved parameters Not found. Creating new one...')
      model = Sequential()
      model.add(LSTM(self.hidden_neurons, \
              batch_input_shape=(None, self.length_of_sequences, self.in_out_neurons), \
              return_sequences=False))
      model.add(Dense(self.in_out_neurons))
      model.add(Activation("linear"))
      model.compile(loss="mape", optimizer="adam")
    return model

  def train(self, f_model, model_filename, weights_filename, X_train, y_train) :
    model = self.create_model(f_model, model_filename, weights_filename)
    # Learn
    model.fit(X_train, y_train, batch_size=10, epochs=15)
    return model


if __name__ == "__main__":

  f_log = './log'
  f_model = './model/stockprice'
  model_filename = 'stockprice_model.json'
  yaml_filename = 'stockprice_model.yaml'
  weights_filename = 'stockprice_model_weights.hdf5'

  prediction = Prediction()

  # Data
  data = None
  data_ = pandas.read_csv('csv/stock.csv')
  data = data_ if (data is None) else pandas.concat([data, data_])

  data.columns = ['Date', 'Open', 'High', 'Low', 'Close']
  data['Date'] = pandas.to_datetime(data['Date'], format='%Y-%m-%d')
  # Data of closing price
  data['Close'] = preprocessing.scale(data['Close'])
  data = data.sort_values(by='Date')
  data = data.reset_index(drop=True)
  data = data.loc[:, ['Date', 'Close']]

  # 10% of the data is used as test data.
  split_pos = int(len(data) * 0.9)
  x_train, y_train = prediction.load_data(data[['Close']].iloc[0:split_pos], prediction.length_of_sequences)
  x_test,  y_test  = prediction.load_data(data[['Close']].iloc[split_pos:], prediction.length_of_sequences)

  model = prediction.train(f_model, model_filename, weights_filename, x_train, y_train)

  predicted = model.predict(x_test)
  json_string = model.to_json()
  open(os.path.join(f_model,model_filename), 'w').write(json_string)
  yaml_string = model.to_yaml()
  open(os.path.join(f_model,yaml_filename), 'w').write(yaml_string)
  print('save weights')
  model.save_weights(os.path.join(f_model,weights_filename))
  result = pandas.DataFrame(predicted)
  result.columns = ['predict']
  result['actual'] = y_test
  result.plot()
  plt.show()

And run the script:
$ sudo python3 stockprice.py

Now the machine learning will start, based on the data extracted from Yahoo Finance.
(The Nikkei data has empty cells, so remove them first, for example with a spreadsheet macro like the one below, or with the pandas snippet that follows it.
If you are using LibreOffice, add "Option VBASupport 1" at the top of the macro.)
Sub RowsDelete()
   Dim i As Long
   Dim myRow As Long
   myRow = Worksheets("sheet1").Range("A65536").End(xlUp).Row
   For i = myRow To 1 Step -1
       If Cells(i, 2).Value = "null" Then
           Cells(i, 2).EntireRow.Delete
       End If
   Next i
End Sub
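If you prefer to clean the data in Python instead of a spreadsheet macro, a minimal pandas sketch could look like this (it assumes the downloaded file is saved as csv/stock.csv and keeps the standard Yahoo Finance column names):
#-*- coding: utf-8 -*-
import pandas

# pandas treats the "null" cells of the Yahoo Finance CSV as missing values (NaN) by default.
data = pandas.read_csv('csv/stock.csv')
# Keep only the columns that stockprice.py expects.
data = data[['Date', 'Open', 'High', 'Low', 'Close']]
# Drop the rows that contain empty cells and write the cleaned file back.
data = data.dropna()
data.to_csv('csv/stock.csv', index=False)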

Friday, September 8, 2017

How to use Dropzone.js

1. Download dropzone.js from here: http://www.dropzonejs.com/
or here: https://github.com/enyo/dropzone

2. What we need are "dropzone.min.js" and "dropzone.min.css". Move them into your directory, for example into "(current directory)/script".


3. Download jQuery or simply use CDN:
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script>
Or specify where the jQuery file is downloaded:
<script src="./the/file/of/jQuery.js"></script>

4. Write the following and save it as an .html file.
<head>
    <link rel="stylesheet" type="text/css" href="./script/dropzone.min.css">
</head>
<form>
    <div id="dZUpload" class="dropzone">
        <div class="dz-default dz-message"></div>
    </div>
</form>
<!-- Use Google's CDN for jQuery -->
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script>
<script src="./script/dropzone.min.js"></script>
<script>
    Dropzone.autoDiscover = false;
    $(document).ready(function () {
        $("#dZUpload").dropzone({
            url: "http://192.168.1.1/upload.php", //Your target URL
            addRemoveLinks: true,
            //If you use CakePHP 3, you need to send the CSRF token in the headers.
            /*headers: {
                'X-CSRFToken': $('meta[name="token"]').attr('content')
            },*/
            success: function (file, response) {
                var imgName = response;
                //For the animation of success.
                file.previewElement.classList.add("dz-success");
                //To add id to the previews.
                //file.previewElement.id = response.id;
                console.log("Successfully uploaded :" + imgName);
            }
        });
    });
</script>

5. Open the .html file in a browser. You will see that the drop zone is ready to use.
image cited from: https://www.npmjs.com/package/dropzone

But if the provided URL is not valid, you will see error symbols appear on the previews. In that case, press F12 in the browser and check the errors.

If you are using PHP on the server side, use "echo" in the PHP script to send data back to dropzone.js; it arrives as the "response" variable in the success callback. Use json_encode in the PHP script if you want to pass an object to the JavaScript as it is.

Sunday, September 3, 2017

Text generator of writings of Nietzsche

This is a text generator for the writings of Nietzsche. I customized the example code from fchollet/keras.

#!/usr/bin/env python
# -*- coding: UTF-8 -*-
'''Example script to generate text from Nietzsche's writings.
At least 20 epochs are required before the generated text
starts sounding coherent.
It is recommended to run this script on GPU, as recurrent
networks are quite computationally intensive.
If you try this script on new data, make sure your corpus
has at least ~100k characters. ~1M is better.
'''

from __future__ import print_function
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.layers import LSTM
from keras.optimizers import RMSprop
from keras.utils.data_utils import get_file
from keras.models import model_from_json
import keras.backend.tensorflow_backend as KTF
import tensorflow as tf
import os.path
import keras
import numpy as np
import random
import sys


weights_filename = 'textgen_model_weights.hdf5'
f_model = './model/textgen'
f_log = './log/textgen'
model_yaml = 'textgen_model.yaml'
model_filename = 'textgen_model.json'

path = get_file('nietzsche.txt', origin='https://s3.amazonaws.com/text-datasets/nietzsche.txt')
text = open(path).read().lower()
print('corpus length:', len(text))

chars = sorted(list(set(text)))
print('total chars:', len(chars))
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))

# cut the text in semi-redundant sequences of maxlen characters
maxlen = 40
step = 3
sentences = []
next_chars = []
for i in range(0, len(text) - maxlen, step):
    sentences.append(text[i: i + maxlen])
    next_chars.append(text[i + maxlen])
print('nb sequences:', len(sentences))

print('Vectorization...')
X = np.zeros((len(sentences), maxlen, len(chars)), dtype=np.bool)
y = np.zeros((len(sentences), len(chars)), dtype=np.bool)
for i, sentence in enumerate(sentences):
    for t, char in enumerate(sentence):
        X[i, t, char_indices[char]] = 1
    y[i, char_indices[next_chars[i]]] = 1

old_session = KTF.get_session()
session = tf.Session('')
KTF.set_session(session)

if os.path.isfile(os.path.join(f_model,model_filename)):
    print('Saved parameters found. I will use this file...')
    json_string = open(os.path.join(f_model, model_filename)).read()
    model = model_from_json(json_string)
    model.summary()
    optimizer = RMSprop(lr=0.01)
    model.compile(loss='categorical_crossentropy', optimizer=optimizer)
    model.load_weights(os.path.join(f_model,weights_filename))
else:
    # build the model: a single LSTM
    print('Saved parameters Not found. Creating new model...')
    print('Build model...')
    model = Sequential()
    model.add(LSTM(128, input_shape=(maxlen, len(chars))))
    model.add(Dense(len(chars)))
    model.add(Activation('softmax'))
    optimizer = RMSprop(lr=0.01)
    model.compile(loss='categorical_crossentropy', optimizer=optimizer)


def sample(preds, temperature=1.0):
    # helper function to sample an index from a probability array
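    # Lower temperature makes the sampling more conservative (more likely characters); higher temperature makes it more random.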
    preds = np.asarray(preds).astype('float64')
    preds = np.log(preds) / temperature
    exp_preds = np.exp(preds)
    preds = exp_preds / np.sum(exp_preds)
    probas = np.random.multinomial(1, preds, 1)
    return np.argmax(probas)

#tb_cb = keras.callbacks.TensorBoard(log_dir=f_log, histogram_freq=1)
#cp_cb = keras.callbacks.ModelCheckpoint(filepath = os.path.join(f_model,weights_filename), monitor='loss', verbose=1, save_best_only=True, mode='auto')
#cbks = [tb_cb, cp_cb]

# train the model, output generated text after each iteration
for iteration in range(1, 60):
    print()
    print('-' * 50)
    print('Iteration', iteration)
    model.fit(X, y,
              batch_size=128,
              epochs=1,)
              #callbacks=cbks)

    start_index = random.randint(0, len(text) - maxlen - 1)

    for diversity in [0.2, 0.5, 1.0, 1.2]:
        print()
        print('----- diversity:', diversity)

        generated = ''
        sentence = text[start_index: start_index + maxlen]
        generated += sentence
        print('----- Generating with seed: "' + sentence + '"')
        sys.stdout.write(generated)

        for i in range(400):
            x = np.zeros((1, maxlen, len(chars)))
            for t, char in enumerate(sentence):
                x[0, t, char_indices[char]] = 1.

            preds = model.predict(x, verbose=0)[0]
            next_index = sample(preds, diversity)
            next_char = indices_char[next_index]

            generated += next_char
            sentence = sentence[1:] + next_char

            sys.stdout.write(next_char)
            sys.stdout.flush()
        print()

json_string = model.to_json()
open(os.path.join(f_model,model_filename), 'w').write(json_string)
yaml_string = model.to_yaml()
open(os.path.join(f_model,model_yaml), 'w').write(yaml_string)
print('save weights')
model.save_weights(os.path.join(f_model,weights_filename))
KTF.set_session(old_session)

Saturday, September 2, 2017

Javascript debug

1. Use "debugger".


Write "debugger" during the Javascript code:
<a href="javascript:debug_test();">Click me</a>
<script>
function debug_test(){
    var num = 5;
    for(var i = 0; i < 10; i++){
        num++;
        debugger;
    }
}
</script>

And the execution will stop at the line where the "debugger" is written.

2. console.log, console.warn, console.error, console.info


Write like this:
<script>
var salut = "salut";
for(var i = 0; i < 10; i++){
     console.log(salut); // What is in the variable? Check it in the browser console.
     console.warn(salut);
     console.error(salut);
     console.info(salut);
}
</script>

Press F12 and you can see these messages shown in the console.




3. console.table

Write like this:
<script>
var salut = "salut";
var bonjour = "bon jour";
var aurevoir = "au revoir";
console.table([[salut,bonjour,aurevoir], [1,2,3]]);
</script>

And the table appears in the browser console.

You can see details of the object too in the same way:
<script>
var car = {type:"Fiat", model:"500", color:"white"};
console.table(car);
</script>

This code will display the object as a table.
(But this might not work in Chrome's console; use Firefox in that case.)


4. "console.trace" or "try and catch"


console.trace should not remain in the code after development, because it is not a standard function. But if you want to see the call trace, you can use it in the following way:
<script>
function foo() {
  function bar() {
    console.trace();
  }
  bar();
}

foo();
</script>

This code will show on the console:
bar
foo
<anonymous>

But you can use "try and catch" also:
<script>
try {
    hohoho("Bon jour.");
}
catch(err) {
    console.log(err);
}
</script>

This will show the error with its stack trace:
"hohoho is not defined". Of course it is not defined, because we never defined it.