## Neural Network with keras: Remainder Problem

The problem we solve here is the remainder problem: we train a neural network to find the remainder when a number randomly drawn from 0 to 99 inclusive is divided by 17. For example, given 20, the remainder is 3.
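As a quick sanity check of the task itself, the data can be generated in a couple of lines of plain Python (a sketch; make_pair is a hypothetical helper, not from the notebook):

```python
import random

def make_pair(rng):
    # draw from 0..99 inclusive; the label is the remainder mod 17
    n = rng.randrange(0, 100)
    return n, n % 17

rng = random.Random(0)
pairs = [make_pair(rng) for _ in range(1000)]
```

Every pair satisfies the definition, e.g. 20 maps to 3 as in the example above.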

The code (in a Jupyter notebook) detailing the results of this post can be found here under the name keras_test1.ipynb. In all the tests, we use only 1 hidden layer made of 64 neurons, and different input and output layers to take the context of the problem into account. We show that taking the context into account helps the neural network train better!

Test 1A and Test 1B

Note: See the corresponding sections in the Jupyter notebook.

We start with a much simpler problem. Draw a random number from 0 to 10 inclusive and find its remainder when divided by 10, which is quite trivial. In test 1A, with 4 epochs, we see a steady improvement in prediction accuracy up to 82%. With 12 epochs in test 1B, our accuracy is approximately 100%. Good!

Test 2A and Test 2B

Now, we raise the hurdle. We draw from a wider range of random numbers, from 0 to 99 inclusive. To be fair, we give the neural network more data points for training. We get a pretty bad outcome: the trained model in test 2A suffers from predicting only 1 outcome (it always predicts that the remainder is 0). In test 2B, we perform the same training over more epochs. The problem still occurs.
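A cheap way to spot this failure mode is to histogram the predicted classes: if every argmax lands on the same class, the model has collapsed to a single output. A plain-Python sketch with made-up predictions:

```python
from collections import Counter

# Fake softmax outputs mimicking the collapsed model of tests 2A/2B:
# every row puts all of its mass on class 0.
preds = [[1.0] + [0.0] * 16 for _ in range(8)]

def argmax(row):
    return max(range(len(row)), key=row.__getitem__)

predicted = [argmax(p) for p in preds]
counts = Counter(predicted)
collapsed = (max(counts.values()) == len(preds))  # one class takes every prediction
```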

Test 3A

Now we solve the problem in test 2A and 2B by contextualizing the problem. Notice that in test 1A, 1B, 2A and 2B, there is only 1 input (i.e. 1 neuron in the input layer) which exactly corresponds to the random number whose remainder is to be computed.

Now, in this test, we convert it into 2 inputs, splitting the tens and units digits. For example, if the number is 64, the input to our neural network is now (6,4). If the number is 5, then it becomes (0,5). This is done using the extract_digit() function. The possible “concept” the neural network can learn is that for division by 10, only the last digit matters. That is to say, if our input is (a,b) after the conversion, then only b matters.
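The digit split can be sketched in a few lines (this mirrors the behaviour described for extract_digit(); the body here is a stand-in, not the notebook's exact code):

```python
def extract_digit(x):
    # split a number from 0..99 into (tens digit, units digit)
    b = x % 10
    a = (x - b) // 10
    return [a, b]

# For division by 10, only the second component matters:
assert all(extract_digit(x)[1] == x % 10 for x in range(100))
```

So after the conversion, the mod-10 label depends on b alone, which is exactly the shortcut we hope the network discovers.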

What do we get? 100% accuracy! All is good.

Test 3B

Finally, we raise the complexity and solve our original problem. We draw from 0 to 99 inclusive and find the remainder from division by 17. We use the extract_digit() function here as well. Running it over 24 epochs, we get an accuracy of 96% (and it does look like it can be improved further)!

Conclusion? First things first, this is just a demonstration of a neural network using keras. But more importantly, contextualizing the input does help!

The code for test 3B can be found in the following.

[1]

```
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
```

[2]

```
N = 100
D = 17

def simple_binarizer17(y, bin_factor=1, bin_shift=0):
    # one-hot encode y into a length-17 vector
    out = [0+bin_shift]*17
    out[y] = 1*bin_factor
    return out

def extract_digit(x):
    # split x into its tens and units digits
    b = x%10
    a = (x-b)/10
    return [int(a),int(b)]

X0_train = np.random.randint(N+1,size=(256000,1))
Y_train = np.array([simple_binarizer17(x%D) for x in np.transpose(X0_train).tolist()[0]])
X0_test = np.random.randint(N+1,size=(100,1))
Y_test = np.array([simple_binarizer17(x%D) for x in np.transpose(X0_test).tolist()[0]])

X_train = np.array([extract_digit(X[0]) for X in X0_train])
X_test = np.array([extract_digit(X[0]) for X in X0_test])
for X0,X in zip(X0_train[:10],X_train[:10]):
    print(X0,"->",X)
```

[3]

```
model = Sequential()
# Network as described in the text: 2 digit-inputs -> 64-neuron hidden
# layer -> 17 softmax outputs (the hidden activation here is assumed).
model.add(Dense(64, activation='relu', input_dim=2))
model.add(Dense(17, activation='softmax'))

model.compile(loss='categorical_crossentropy',
              optimizer='sgd',
              metrics=['accuracy'])
model.fit(X_train, Y_train, epochs=24, batch_size=32)

loss_and_metrics = model.evaluate(X_test, Y_test, batch_size=10)
print("--LOSS and METRIC--")
print(loss_and_metrics)
print("--PREDICT--")
classes = model.predict(X_test, batch_size=16)
```

[4]

```
count = 0
correct_count = 0
for y0,y in zip(Y_test,classes):
    count = count+1
    correct_pred = False
    if np.argmax(y0)==np.argmax(y):
        correct_pred = True
        correct_count = correct_count + 1
    if count<20:
        print(np.argmax(y0),"->",np.argmax(y), "(",correct_pred,")")
accuracy = correct_count/len(Y_test)
print("accuracy = ", accuracy)
```

## Testing GPU usage of Tensorflow

Use the following code to test whether tensorflow-gpu is able to utilize the GPU. This is for tensorflow 1.10.0.

```
import tensorflow as tf
import numpy as np

xx = np.random.normal(0,100,1200)
yy = np.random.normal(0,100,1200)

from tensorflow.python.client import device_lib

def get_available_gpus():
    # list_local_devices() returns all devices; keep only the GPUs
    local_device_protos = device_lib.list_local_devices()
    return [x.name for x in local_device_protos if x.device_type == 'GPU']

print(get_available_gpus())
```

## Object Detection using Tensorflow: bee and butterfly Part V, faster

This post is a faster alternative to the following post: Object Detection using Tensorflow: bee and butterfly Part V.

In part IV, we ended by completing the training of our faster R-CNN model. Since we ran 2000 training steps, the last model checkpoint produced will be model.ckpt-2000. We need to make a frozen graph out of it to be able to utilize it for prediction.

Freezing the graph

Let’s go into command line cmd.exe. Remember to go into the virtual environment if you started with one, as we instructed.

```
cd C:\Users\acer\Desktop\adhoc\myproject\Lib\site-packages\tensorflow\models\research
SET INPUT_TYPE=image_tensor
python object_detection/export_inference_graph.py --input_type=%INPUT_TYPE% --pipeline_config_path=%PIPELINE_CONFIG_PATH% --trained_checkpoint_prefix=%TRAINED_CKPT_PREFIX% --output_directory=%EXPORT_DIR%
```

Upon successful completion, the following will be produced in the export directory.

```
adhoc/myproject/models/export
+ saved_model
  + variables
  - saved_model.pb
- checkpoint
- frozen_inference_graph.pb
- pipeline.config
- model.ckpt.data-00000-of-00001
- model.ckpt.index
- model.ckpt.meta
```

Notice that three ckpt files are created. We can use these for further training by replacing the 3 ckpt files from part IV.

frozen_inference_graph.pb is the file we will be using for prediction. We just need to run the following python file with suitable configuration. Create the following directory and put all the images containing the butterflies or bees you want the algorithm to detect into the folder for_predict. In this example, we use 6 images, namely “1.jpeg”, “2.jpeg”, …, “6.jpeg”.

```
adhoc/myproject/
+ ...
+ for_predict
```

Finally, to perform prediction, just run the following using cmd.exe after moving into the adhoc/myproject folder where we placed our prediction2.py (see the script below).

`python prediction2.py`

and 1_MARKED.jpeg, for example, will be produced in for_predict, with boxes showing the detected objects, either butterflies or bees.

Most of the configuration that needs to be done is near the top of the script. The variable TEST_IMAGE_NAMES contains the names of the files we are going to predict. You can rename the images or just change the variable. Note that the variable filetype stores the file type of the images we are predicting; each run can thus only perform prediction on images of the same type. Of course we can do better. Modify the script accordingly.

prediction2.py

```
# from distutils.version import StrictVersion
import os, sys,tarfile, zipfile
import numpy as np
import tensorflow as tf
import six.moves.urllib as urllib
from PIL import Image
from io import StringIO
from matplotlib import pyplot as plt
from collections import defaultdict
from object_detection.utils import ops as utils_ops

import time
start_all = time.time()
# Paths settings. THE_PATH, PATH_TO_TEST_IMAGES_DIR, PATH_TO_FROZEN_GRAPH
# and PATH_TO_LABELS below are to be configured for your setup.
sys.path.append(THE_PATH)
sys.path.append(THE_PATH+"/object_detection")
filetype = '.jpeg'
TEST_IMAGE_NAMES = [str(i) for i in range(1,7)]
TEST_IMAGE_PATHS = [''.join((PATH_TO_TEST_IMAGES_DIR, '\\', x, filetype)) for x in TEST_IMAGE_NAMES]
# print("test image path = ", TEST_IMAGE_PATHS)
IMAGE_SIZE = (12, 8) # Size, in inches, of the output images.
NUM_CLASSES = 90

from utils import label_map_util
from utils import visualization_utils as vis_util
sys.path.append("..")
# MODEL_NAME = 'faster_rcnn_resnet101_pets'

start = time.time()
detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')

label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)
print(category_index)
end=time.time()

def load_image_into_numpy_array(image):
    (im_width, im_height) = image.size
    return np.array(image.getdata()).reshape(
        (im_height, im_width, 3)).astype(np.uint8)

timesetX=[]
def run_inference_for_single_image(image, graph):
    with graph.as_default():
        # Get handles to input and output tensors
        ops = tf.get_default_graph().get_operations()
        all_tensor_names = {output.name for op in ops for output in op.outputs}
        tensor_dict = {}
        for key in [
            'num_detections', 'detection_boxes', 'detection_scores',
            'detection_classes',
        ]:
            tensor_name = key + ':0'
            if tensor_name in all_tensor_names:
                tensor_dict[key] = tf.get_default_graph().get_tensor_by_name(
                    tensor_name)
        # The following processing is only for single image
        detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0])
        # Reframe is required to translate mask from box coordinates to image coordinates and fit the image size.
        real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32)
        detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1])
        image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')

        # Run inference (sess is the session opened once in the loop below)
        start0X = time.time()
        output_dict = sess.run(tensor_dict,
                               feed_dict={image_tensor: np.expand_dims(image, 0)})
        end0X=time.time()
        timesetX.append(end0X-start0X)
        # all outputs are float32 numpy arrays, so convert types as appropriate
        output_dict['num_detections'] = int(output_dict['num_detections'][0])
        output_dict['detection_classes'] = output_dict[
            'detection_classes'][0].astype(np.uint8)
        output_dict['detection_boxes'] = output_dict['detection_boxes'][0]
        output_dict['detection_scores'] = output_dict['detection_scores'][0]
    return output_dict

timeset=[]
timeset2=[]
config = tf.ConfigProto()
with tf.Session(config=config,graph=detection_graph) as sess:
    for image_path, image_name in zip(TEST_IMAGE_PATHS, TEST_IMAGE_NAMES):
        image = Image.open(image_path).convert('RGB') # !!
        # the array based representation of the image will be used later in order to prepare the
        # result image with boxes and labels on it.
        image_np = load_image_into_numpy_array(image)
        # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
        image_np_expanded = np.expand_dims(image_np, axis=0)
        # Actual detection.

        start0 = time.time() # bottleneck in the main detection, 22s per img
        output_dict = run_inference_for_single_image(image_np, detection_graph)
        end0=time.time()
        timeset.append(end0-start0)

        start1 = time.time()
        # Visualization of the results of a detection.
        vis_util.visualize_boxes_and_labels_on_image_array(
            image_np,
            output_dict['detection_boxes'],
            # each element in detection_boxes is [ymin, xmin, ymax, xmax]
            # in normalized coordinates; for pixel coordinates consider:
            #   im_width, im_height = image.size
            #   if use_normalized_coordinates:
            #     (left, right, top, bottom) = (xmin * im_width, xmax * im_width,
            #                                   ymin * im_height, ymax * im_height)
            output_dict['detection_classes'],
            output_dict['detection_scores'],
            category_index,
            use_normalized_coordinates=True,
            line_thickness=2,
            min_score_thresh = 0.05)
        # print("detection_boxes:")
        # print(output_dict['detection_boxes'])
        # print(type(output_dict['detection_boxes']),len(output_dict['detection_boxes']))
        # print('detection_classes')
        # print(output_dict['detection_classes'])
        # print(type(output_dict['detection_classes']),len(output_dict['detection_classes']))
        # print('detection_scores')
        # print(output_dict['detection_scores'], len(output_dict['detection_scores']))
        print('\n**************** detection_scores\n')
        print(output_dict['detection_scores'][1:10])
        plt.figure(figsize=IMAGE_SIZE)
        # plt.imshow(image_np)
        plt.imsave(''.join((PATH_TO_TEST_IMAGES_DIR, '\\',image_name,"_MARKED", filetype)), image_np)
        end1=time.time()
        timeset2.append(end1-start1)

print("time 1 = ", end-start)
print("time each:")
for i in range(len(timeset)):
    print(" + ",timeset[i])
    # print(" + ",timeset[i],":",timesetX[i], " : ",timeset2[i])
end_all=time.time()
print("time all= ", end_all-start_all)

# plt.show()
```

The results should be similar to the ones in Object Detection using Tensorflow: bee and butterfly Part V. The only difference is the processing speed. I used an NVIDIA GeForce GTX 1050 and the performance is as follows.

```
time 1 = 2.0489230155944824
time each:
+ 18.73304057121277
+ 1.6632516384124756
+ 1.7054014205932617
+ 1.5573828220367432
+ 1.6851420402526855
+ 0.5358219146728516
time all= 34.96004343032837
```

Using the previous code, the speed would be ~18 seconds for each image. A better GPU can yield even faster performance. On another project, using a GTX 1080 on images with 1920×1080 pixels, the time can be as fast as 0.2 s per image from the second image onwards. Using CPU only, one example I tried yielded ~4.5 s per image from the second image onwards.
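Since the first image absorbs the one-off graph warm-up cost, a fairer steady-state figure averages the remaining timings (using the numbers reported above):

```python
# Per-image timings from the run above (seconds); the first includes warm-up.
timeset = [18.73304057121277, 1.6632516384124756, 1.7054014205932617,
           1.5573828220367432, 1.6851420402526855, 0.5358219146728516]

steady = timeset[1:]                      # skip the warm-up image
mean_steady = sum(steady) / len(steady)   # roughly 1.4 s per image
```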

## Object Detection using Tensorflow: bee and butterfly Part V

Object Detection using Tensorflow: bee and butterflies

Tips. Instead of reading this post, read Object Detection using Tensorflow: bee and butterfly Part V, faster, where the object detection process is performed with a much more efficient arrangement. The code below re-runs the tensorflow session at each iteration over the images on which we perform object detection. However, this costs a large time overhead. In our new code, the session is only run once. The first image will take some time, but the subsequent images will be processed very quickly.

In part IV, we ended by completing the training of our faster R-CNN model. Since we ran 2000 training steps, the last model checkpoint produced will be model.ckpt-2000. We need to make a frozen graph out of it to be able to utilize it for prediction.

Freezing the graph

Let’s go into command line cmd.exe. Remember to go into the virtual environment if you started with one, as we instructed.

```
cd C:\Users\acer\Desktop\adhoc\myproject\Lib\site-packages\tensorflow\models\research
SET INPUT_TYPE=image_tensor
python object_detection/export_inference_graph.py --input_type=%INPUT_TYPE% --pipeline_config_path=%PIPELINE_CONFIG_PATH% --trained_checkpoint_prefix=%TRAINED_CKPT_PREFIX% --output_directory=%EXPORT_DIR%
```

Upon successful completion, the following will be produced in the export directory.

```
adhoc/myproject/models/export
+ saved_model
  + variables
  - saved_model.pb
- checkpoint
- frozen_inference_graph.pb
- pipeline.config
- model.ckpt.data-00000-of-00001
- model.ckpt.index
- model.ckpt.meta
```

Notice that three ckpt files are created. We can use these for further training by replacing the 3 ckpt files from part IV.

frozen_inference_graph.pb is the file we will be using for prediction. We just need to run the following python file with suitable configuration. Create the following directory and put all the images containing the butterflies or bees you want the algorithm to detect into the folder img_predict. I will put in 4 images: 2 from images/test and 2 completely new pictures, from neither images/test nor images/train (but also from https://www.pexels.com/). These images are named predict1.png, predict2.png, predict3.png and predict4.png.

```
adhoc/myproject/
+ ...
+ img_predict
```

Finally, to perform prediction, just run the following using cmd.exe after moving into the adhoc/myproject folder where we placed our prediction.py (see the script below).

`python prediction.py`

and predict1_MARKED.png, for example, will be produced in img_predict, with boxes showing the detected objects, either butterflies or bees.

Most of the configuration that needs to be done is near the top of the script. The variable TEST_IMAGE_NAMES contains the names of the files we are going to predict. You can rename the images or just change the variable. Note that the variable filetype stores the file type of the images we are predicting; each run can thus only perform prediction on images of the same type. Of course we can do better. Modify the script accordingly.

prediction.py

```
# from distutils.version import StrictVersion
import os, sys,tarfile, zipfile
import numpy as np
import tensorflow as tf
import six.moves.urllib as urllib
from PIL import Image
from io import StringIO
from matplotlib import pyplot as plt
from collections import defaultdict
from object_detection.utils import ops as utils_ops

# Paths settings. THE_PATH, PATH_TO_TEST_IMAGES_DIR, PATH_TO_FROZEN_GRAPH
# and PATH_TO_LABELS below are to be configured for your setup.
sys.path.append(THE_PATH)
sys.path.append(THE_PATH+"/object_detection")

TEST_IMAGE_NAMES = ["predict1","predict2","predict3","predict4"]
filetype = '.png'
TEST_IMAGE_PATHS = [''.join((PATH_TO_TEST_IMAGES_DIR, '\\', x, filetype)) for x in TEST_IMAGE_NAMES]
# print("test image path = ", TEST_IMAGE_PATHS)
IMAGE_SIZE = (12, 8) # Size, in inches, of the output images.
NUM_CLASSES = 90

from utils import label_map_util
from utils import visualization_utils as vis_util
sys.path.append("..")
# MODEL_NAME = 'faster_rcnn_resnet101_pets'

detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')

label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)
print(category_index)

def load_image_into_numpy_array(image):
    (im_width, im_height) = image.size
    return np.array(image.getdata()).reshape(
        (im_height, im_width, 3)).astype(np.uint8)

def run_inference_for_single_image(image, graph):
    with graph.as_default():
        with tf.Session() as sess:
            # Get handles to input and output tensors
            ops = tf.get_default_graph().get_operations()
            all_tensor_names = {output.name for op in ops for output in op.outputs}
            tensor_dict = {}
            for key in [
                'num_detections', 'detection_boxes', 'detection_scores',
                'detection_classes',
            ]:
                tensor_name = key + ':0'
                if tensor_name in all_tensor_names:
                    tensor_dict[key] = tf.get_default_graph().get_tensor_by_name(
                        tensor_name)
            # The following processing is only for single image
            detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0])
            # Reframe is required to translate mask from box coordinates to image coordinates and fit the image size.
            real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32)
            detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1])
            image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')

            # Run inference
            output_dict = sess.run(tensor_dict,
                                   feed_dict={image_tensor: np.expand_dims(image, 0)})

            # all outputs are float32 numpy arrays, so convert types as appropriate
            output_dict['num_detections'] = int(output_dict['num_detections'][0])
            output_dict['detection_classes'] = output_dict[
                'detection_classes'][0].astype(np.uint8)
            output_dict['detection_boxes'] = output_dict['detection_boxes'][0]
            output_dict['detection_scores'] = output_dict['detection_scores'][0]
    return output_dict

for image_path, image_name in zip(TEST_IMAGE_PATHS, TEST_IMAGE_NAMES):
    image = Image.open(image_path)
    # the array based representation of the image will be used later in order to prepare the
    # result image with boxes and labels on it.
    image_np = load_image_into_numpy_array(image)
    # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
    image_np_expanded = np.expand_dims(image_np, axis=0)
    # Actual detection.
    output_dict = run_inference_for_single_image(image_np, detection_graph)
    # Visualization of the results of a detection.
    vis_util.visualize_boxes_and_labels_on_image_array(
        image_np,
        output_dict['detection_boxes'],
        # each element in detection_boxes is [ymin, xmin, ymax, xmax]
        # in normalized coordinates; for pixel coordinates consider:
        #   im_width, im_height = image.size
        #   if use_normalized_coordinates:
        #     (left, right, top, bottom) = (xmin * im_width, xmax * im_width,
        #                                   ymin * im_height, ymax * im_height)
        output_dict['detection_classes'],
        output_dict['detection_scores'],
        category_index,
        use_normalized_coordinates=True,
        line_thickness=2,
        min_score_thresh = 0.4)
    # print("detection_boxes:")
    # print(output_dict['detection_boxes'])
    # print(type(output_dict['detection_boxes']),len(output_dict['detection_boxes']))
    # print('detection_classes')
    # print(output_dict['detection_classes'])
    # print(type(output_dict['detection_classes']),len(output_dict['detection_classes']))
    # print('detection_scores')
    # print(output_dict['detection_scores'], len(output_dict['detection_scores']))
    print('\n**************** detection_scores\n')
    print(output_dict['detection_scores'][1:10])
    plt.figure(figsize=IMAGE_SIZE)
    # plt.imshow(image_np)
    plt.imsave(''.join((PATH_TO_TEST_IMAGES_DIR, '\\',image_name,"_MARKED", filetype)), image_np)
```

Here are the outputs; pretty good, I would say.

Note: the commented code is there to assist you with the output of the prediction. For example, if you would like to extract the coordinates of the rectangles that show the positions of the butterflies or the bees, you can obtain them from output_dict['detection_boxes']. Other information is stored in the dictionary output_dict as well.
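For example, the boxes come in normalized [ymin, xmin, ymax, xmax] form; converting one to pixel coordinates follows the formula quoted in the script's comments (a sketch with a made-up box and image size; box_to_pixels is a hypothetical helper):

```python
def box_to_pixels(box, im_width, im_height):
    # box = [ymin, xmin, ymax, xmax] in normalized (0..1) coordinates,
    # the format of each row of output_dict['detection_boxes']
    ymin, xmin, ymax, xmax = box
    left, right = xmin * im_width, xmax * im_width
    top, bottom = ymin * im_height, ymax * im_height
    return left, right, top, bottom

# A detection covering the central quarter of a 640x480 image:
left, right, top, bottom = box_to_pixels([0.25, 0.25, 0.75, 0.75], 640, 480)
```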

You can play around with different models. But that’s it for now, cheers!

## Object Detection using Tensorflow: coco API for python in Windows

Object Detection using Tensorflow: bee and butterflies

We continue our tutorial from part IV. Since the instructions to set up the coco API for python here are for Linux, we need to find a way to do it in Windows. First download the coco API and extract it; you will see the folder cocoapi-master. The following instructions mainly refer to the link here.

We need to follow the following steps beforehand:

1. Install Microsoft Visual C++ 14.0. ***
2. Visual C++ might raise an rc.exe error. Fix it by adding C:\Program Files (x86)\Windows Kits\8.1\bin\x64 to the PATH variable.
3. Inside cocoapi-master/PythonAPI, edit setup.py (see below).

Install MinGW and go into msys.exe. Move into the PythonAPI folder of the coco API just downloaded.

```
cd "C:\Users\acer\Downloads\cocoapi-master\PythonAPI"
make
```

If successful, the file _mask.cp36-win_amd64.pyd will be generated in /PythonAPI/pycocotools. Move the folder pycocotools so that we have

```
adhoc\myproject\Lib\site-packages\tensorflow\models\research
+ pycocotools
- ...
```

The modified setup.py is shown here.

setup.py

```
import sys
from distutils.core import setup
from Cython.Build import cythonize
from distutils.extension import Extension
import numpy as np

extra_compile_args = ['-Wno-cpp', '-Wno-unused-function', '-std=c99']\
    if sys.platform != 'win32' else []
ext_modules = [
    Extension(
        'pycocotools._mask',
        sources=['../common/maskApi.c', 'pycocotools/_mask.pyx'],
        language='c++',
        include_dirs = [np.get_include(), '../common'],
        extra_compile_args=extra_compile_args,
    )
]

setup(name='pycocotools',
      packages=['pycocotools'],
      package_dir = {'pycocotools': 'pycocotools'},
      version='2.0',
      ext_modules=cythonize(ext_modules)
)
```

*** We attempted to install VC++ 14 via the Visual Studio Community 2017 installer on Windows 10 and had some trouble. In particular, msys.exe raised the error “cannot find vcvarsall.bat”. Indeed, when we look into “C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC”, vcvarsall.bat is not installed.

Instead, we download Visual Studio Community 2015 here and install the Programming Languages/Visual C++ package as shown here. A web installer is available, but it does not download smoothly. When the downloader says “A Setup Package is Missing or Damaged”, just keep clicking “Retry” until it works.

## Object Detection using Tensorflow: bee and butterfly Part IV

Object Detection using Tensorflow: bee and butterflies

We have prepared tfrecord files, which are basically just the images and annotations bundled into a format that we can feed into our tensorflow algorithm. Now we start the training.

Before proceeding, we need to use the coco API for python. It is given here, though the instructions given are for setting it up on Linux. See here for the instructions to set it up in Windows. Once done, copy the entire folder pycocotools from inside PythonAPI into the following folder.

`C:\Users\acer\Desktop\adhoc\myproject\Lib\site-packages\tensorflow\models\research`

Some recap: in part I, we set the label map but have not yet configured the model we will use.

```
item {
  id: 1
  name: 'butterfly'
}

item {
  id: 2
  name: 'bee'
}
```

```
adhoc/myproject/models/model
+ faster_rcnn_resnet101_coco.config
+ model.ckpt.data-00000-of-00001
+ model.ckpt.index
+ model.ckpt.meta
```

We will now configure the PATH_TO_BE_CONFIGURED entries in faster_rcnn_resnet101_coco.config inside the folder adhoc/myproject/models/model/. As the name suggests, we are using faster R-CNN, regions with convolutional neural network features, by Ross Girshick et al. There are 5 PATH_TO_BE_CONFIGURED entries, each pointing to the corresponding file.

```
train_config: {
  ...
  fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt"
  ...
}

train_input_reader: {
  tf_record_input_reader {
    input_path: "PATH_TO_BE_CONFIGURED/mscoco_train.record-?????-of-00100"
  }
  label_map_path: "PATH_TO_BE_CONFIGURED/mscoco_label_map.pbtxt"
}

eval_input_reader: {
  tf_record_input_reader {
    input_path: "PATH_TO_BE_CONFIGURED/mscoco_val.record-?????-of-00010"
  }
  label_map_path: "PATH_TO_BE_CONFIGURED/mscoco_label_map.pbtxt"
  shuffle: false
}
```

Let us start training! Here we use a small number of training and evaluation steps, just to finish the training fast, of course at the expense of accuracy. The official site recommends NUM_TRAIN_STEPS=50000 and NUM_EVAL_STEPS=2000 in its tutorial. Go to the command line, cmd.exe.

```
cd "C:\Users\acer\Desktop\adhoc\myproject\Lib\site-packages\tensorflow\models\research"

echo %MODEL_DIR%
SET NUM_TRAIN_STEPS=2000
SET NUM_EVAL_STEPS=100
python object_detection/model_main.py --pipeline_config_path=%PIPELINE_CONFIG_PATH% --model_dir=%MODEL_DIR% --num_train_steps=%NUM_TRAIN_STEPS% --num_eval_steps=%NUM_EVAL_STEPS% --alsologtostderr
```

Some possible errors that may arise are listed at the end of this post***.

Many warnings may pop up, but it will be fairly obvious if the training is ongoing (and not terminating due to some error). If you train your model on a laptop like mine, with only a single NVIDIA GeForce GTX 1050, you might run out of memory as well. From the task manager, my utilization profile looks like this: the GPU is used in large spikes consistently (more than 25% each spike), and CPU resources are heavily consumed.

See the next part on how to use the trained model to perform object detection after the training is completed.

Note: I noticed some strange behaviour. Sometimes the training stops (the task manager shows no resource consumption) and only proceeds when I press enter at the command line. To make sure this does not prevent us from completing the training, press enter several times in advance at the start of training.

Monitoring progress

We can monitor progress using tensorboard. Open another command line cmd.exe, enter the following

```
SET MODEL_DIR="C:\Users\acer\Desktop\adhoc\myproject\models\model"
tensorboard --logdir=%MODEL_DIR%
```

then open your web browser and type in the URL, given by the name of your computer at port 6006. For mine, it is http://NITROGIGA:6006. About 15 minutes into the training, tensorboard shows the following:

Okay, it seems like the algorithm is detecting some little yellow parts of the flowers as bees as well…

The following is an indication that the training is in progress.

```
creating index...
index created!
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=0.12s).
Accumulating evaluation results...
DONE (t=0.04s).
Average Precision (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.015
Average Precision (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.056
Average Precision (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.003
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.052
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.040
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.087
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.093
Average Recall    (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
```

and in the directory adhoc/myproject/models/model, more checkpoint files will be created in batches of 3: data, index and meta. For example, at the 58th (out of the specified 2000) training steps, these are created.

```
model.ckpt-58.data-00000-of-00001
model.ckpt-58.index
model.ckpt-58.meta
```

Update: the training lasted about 4 hours.

*** Possible errors

If you see errors like

`(unicode error) 'unicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape`

do check the .config files. When setting the PATH_TO_BE_CONFIGURED, use either a double backslash \\ or a single forward slash /. Using a single backslash \ will give an error. This is just a problem of escape characters in strings.
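The problem and both fixes are easy to demonstrate in plain Python (the paths are made up):

```python
# "C:\Users\..." is dangerous in a string literal: \U starts a
# \UXXXXXXXX unicode escape. Escape the backslashes, use a raw
# string, or use forward slashes instead.
escaped = "C:\\Users\\acer\\model.ckpt"
raw = r"C:\Users\acer\model.ckpt"
forward = "C:/Users/acer/model.ckpt"

same = (escaped == raw)  # both spell the identical path
```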

*** Another possible error.

We successfully used tensorflow-gpu version 1.10.0. However, when trying version 1.11.0, our machine did not recognize the gpu.

## Object Detection using Tensorflow: bee and butterfly Part II

Object Detection using Tensorflow: bee and butterflies

Tips: do remember to activate the virtual environment if you have deactivated it. A virtual environment helps ensure that the packages we download do not interfere with the system or other projects, especially when we need older versions of some packages.

We continue from Part I. Let us prepare the data to feed into the algorithm for training. We will not feed the images into the algorithm directly, but will convert them into tfrecord files. Create the following directory structure for the preparation. I named it keropb; you can name it anything.

```
adhoc/keropb
+ butterflies_and_bees
  + Butterflies
    - butterflyimage1.png
    - ...
  + Butterflies_canvas
    - butterflyimage1.png
    - ...
  + Bees
    - beeimage1.png
    - ...
  + Bees_canvas
    - beeimage1.png
    - ...
+ do_clone_to_annotate.py
+ do_convert_to_PASCALVOC.py
+ do_move_a_fraction.py
```

Note: the image folders and the corresponding canvas folders can be downloaded here. Also, do not worry: the python scripts will be provided along the way.

We store all our butterfly images in the folder Butterflies and bee images in the folder Bees. The _canvas folders are exact replicas of the corresponding folders: you can copy-paste both the Butterflies and Bees folders and rename them. In the canvas folders, however, we will mark out the butterflies and the bees. In a sense, we are teaching the algorithm which objects in the pictures are butterflies, and which are bees. To mark out a butterfly, use white (255,255,255) RGB to block out the butterfly. This is easy to do: just use the good ol’ Paint program and paint over the butterfly with white, or use the eraser. See the example below. Note that the paired images have exactly the same names.

Tips: if an image contains white patches, they might be wrongly detected as a butterfly too. This is bad. In that case, paint these irrelevant white patches with another obvious color, such as black.
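The white-canvas trick boils down to thresholding: any pixel at or above a near-white threshold is treated as part of the object, and its bounding box becomes the annotation. A toy plain-Python sketch (white_bbox is illustrative, not kero's actual implementation; thresh mirrors the thresh=250 argument used later):

```python
# Toy 5x5 grayscale "canvas"; 255 marks the painted-over object.
img = [
    [0,   0,   0,   0, 0],
    [0, 255, 255,   0, 0],
    [0, 255, 255,   0, 0],
    [0,   0,   0,  40, 0],
    [0,   0,   0,   0, 0],
]

def white_bbox(img, thresh=250):
    # bounding box (xmin, ymin, xmax, ymax) of all pixels >= thresh
    coords = [(x, y) for y, row in enumerate(img)
                     for x, v in enumerate(row) if v >= thresh]
    xs = [x for x, _ in coords]
    ys = [y for _, y in coords]
    return min(xs), min(ys), max(xs), max(ys)

bbox = white_bbox(img)
```

The grey pixel (40) falls below the threshold, so a stray non-white patch does not affect the box.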

Install the package kero and its dependencies.

```
pip install kero
pip install opencv-python
pip install pandas
```

Tips. Consider using clone_to_annotate_faster(). It is A LOT faster, with a little trade-off in the accuracy of bounding boxes on rotated images. The step-by-step instructions can be found in Object Detection using Tensorflow: bee and butterfly Part II, faster. If you do, we can skip the following steps and the front part of Part III. Follow the instructions there.

Create and run the following script do_clone_to_annotate.py from adhoc/keropb i.e. in cmd.exe, cd into keropb and run the command

`python do_clone_to_annotate.py`

Tips: We have set check_missing_mode=False. It is good to set it to True first. This checks whether each image in Butterflies has a corresponding image in Butterflies_canvas, so that we can fix any missing images before processing. If everything is fine, “ALL GREEN. No missing files.” will be printed. Then set it back to False and run again.
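The missing-file check boils down to comparing the two folders' filenames. A rough sketch of the idea (`find_missing` is our own toy helper working on name lists, not kero's API; kero compares the actual folder contents):

```python
def find_missing(image_names, canvas_names):
    """Return image files that have no counterpart canvas file."""
    return sorted(set(image_names) - set(canvas_names))

imgs = ["imgBUT_1.png", "imgBUT_2.png", "imgBUT_3.png"]
canvas = ["imgBUT_1.png", "imgBUT_3.png"]
print(find_missing(imgs, canvas))  # ['imgBUT_2.png']
```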

do_clone_to_annotate.py

```
import kero.ImageProcessing.photoBox as kip

# The folder paths below are assumptions; adjust them to your own directory layout.
this_folder = "butterflies_and_bees/Butterflies"         # the image folder
tag_folder = "butterflies_and_bees/Butterflies_canvas"   # the canvas folder

gsw = kip.GreyScaleWorkShop()
rotate_angle_set = [0, 30, 60, 90, 120, 150, 180]  # None
annotation_name = "butterfly"
gsw.clone_to_annotate(this_folder, tag_folder, 1, annotation_name,
                      order_name="imgBUT_",
                      tag_name="imgBUT_",
                      check_missing_mode=False,
                      rotate_angle_set=rotate_angle_set,
                      thresh=250,
                      significant_fraction=0.01)
```

Note: set order_name and tag_name to be the same so that adhoc_functions.py need not be adjusted later. See that Bees_LOG.txt and Butterflies_LOG.txt are created also, listing how the image files are renamed.

Tips: Read ahead. We will be doing the same thing for Bees folder, so go ahead and open new cmd.exe, create a copy and name it do_clone_to_annotate2.py so that we can run the process in parallel to save time.

Tips: If annotation fails for one reason or another after ground truth image generation is complete, then make sure to set skip_ground_truth=True before rerunning the script, so that we do not waste time re-spawning the ground truth images.

This will create Butterflies_CLONE, Butterflies_GT and Butterflies_ANNOT folders.

1. The CLONE folder contains the images from Butterflies folder, but rotated to different angles as specified by the variable rotate_angle_set. This is to create more training images, so that the algorithm will learn to recognise the object even if it is tilted.
2. The GT folder contains the ground truth images, set to black and white. White patch will be (desirably) the object we point to. Note that this may not be perfect and more settings will be available as we develop the package to optimize this.
3. The ANNOT folder contains annotations, which are boxes to show where the object, butterfly or bee, is. This information is stored in txt file which contains the information in the format:
`label height width xmin ymin xmax ymax`

where label is either bee or butterfly; height and width are the height and width of the entire image. The image will be saved together with the annotation box as shown below.
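For illustration, one line in this annotation format can be parsed like so (`parse_annotation_line` is a hypothetical helper, not part of the package; the sample values are made up):

```python
def parse_annotation_line(line):
    """Parse 'label height width xmin ymin xmax ymax' into a dict."""
    label, h, w, xmin, ymin, xmax, ymax = line.split()
    return {"label": label, "height": int(h), "width": int(w),
            "box": (int(xmin), int(ymin), int(xmax), int(ymax))}

rec = parse_annotation_line("butterfly 350 524 151 9 424 224")
print(rec["label"], rec["box"])  # butterfly (151, 9, 424, 224)
```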

Notice that we do this for the Butterflies folder. Do it for the Bees folder as well. Also, I am using only about 30 images for each category, bee and butterfly (you should use more). Using the above code, we perform 6 rotations on each image, by the angles specified in the variable rotate_angle_set. This is so that the algorithm will be able to recognise the same object even if it appears in a different orientation. Note that at the time of writing, research on DNN is still ongoing and more robust image classification that can handle more transformations such as rotation might be available in the future. In total, then, we have about 180 images each.

To make tfrecord files that we will feed into the algorithm, we will need to convert this information further into PASCAL VOC format. Run the following script do_convert_to_PASCALVOC.py from adhoc/keropb. (See adhoc_functions.py here)

```
import adhoc_functions as af

# The folder names below are assumptions; adjust them to your own directory layout.
# Butterflies
annot_foldername = "Butterflies_ANNOT"
annot_filetype = ".txt"
img_foldername = "Butterflies_CLONE"
img_filetype = ".png"
af.mass_convert_to_PASCAL_VOC_xml(annot_foldername, annot_filetype,
                                  img_foldername, img_filetype)

# Bees
annot_foldername = "Bees_ANNOT"
img_foldername = "Bees_CLONE"
af.mass_convert_to_PASCAL_VOC_xml(annot_foldername, annot_filetype,
                                  img_foldername, img_filetype)
```

A bunch of xml files, each corresponding to a butterfly or bee image, will be created in the _ANNOT folder. The format of these xml files is like this.

Good! We are ready to create tfrecords files in Part III.

## Object Detection using Tensorflow: bee and butterfly Part III


In part II we have created a directory storing butterflies and bees images, together with all the annotations showing where in each image a butterfly or a bee is. Now we convert them into tfrecord files, i.e. convert them into the format that the tensorflow algorithm we use can read.

You are encouraged to create an adhoc script to automate this whole part as well. Our demonstrations will be semi-manual. This part follows the steps recommended here.

## Train and test split

Create the following empty folders:

```
~/adhoc/keropb/butterflies_and_bees
+ Butterflies_train
+ Butterflies_test
+ Bees_train
+ Bees_test
+ ...
```

From Butterflies_CLONE, copy all images to Butterflies_train. From Butterflies_ANNOT, copy all xml files to the same Butterflies_train folder. Do the corresponding steps to the Bees. Now run the script do_move_a_fraction.py in the folder adhoc/keropb (We again make use of adhoc_functions.py from here).

```
import adhoc_functions as af

# The folder names below are assumptions; adjust them to your own directory layout.
src = "Butterflies_train"
tgt = "Butterflies_test"
af.move_some_percent(src, tgt)

src = "Bees_train"
tgt = "Bees_test"
af.move_some_percent(src, tgt)
```

The script above moves 10% of the images and annotations from the train folders to the corresponding test folders. We use roughly 10% of all our images to test whether the model trained on the remaining 90% performs well. As of now, I have 291 images of bees and butterflies for training and 31 for testing (yes, by right we should have more).
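The selection step can be sketched as follows; note that an image and its xml annotation share a filename stem and must move together. `pick_fraction` is our own illustrative helper (not kero's API), and the 10% fraction and fixed seed are assumptions:

```python
import math
import random

def pick_fraction(filenames, fraction=0.10, seed=42):
    """Pick ~fraction of the file stems; each stem's .png and .xml move together."""
    stems = sorted({name.rsplit(".", 1)[0] for name in filenames})
    k = max(1, math.floor(len(stems) * fraction))
    rng = random.Random(seed)          # fixed seed: reproducible split
    return set(rng.sample(stems, k))

files = ["imgBUT_%d.png" % i for i in range(20)] + ["imgBUT_%d.xml" % i for i in range(20)]
chosen = pick_fraction(files)
print(len(chosen))  # 2
```

A real mover would then `shutil.move` both `stem + ".png"` and `stem + ".xml"` for every chosen stem.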

Now create the following directory.

```
C:\Users\acer\Desktop\adhoc\myproject\images
+ train
+ test
```

Put all files from Butterflies_train and Bees_train into images/train and all files from Butterflies_test and Bees_test into images/test.

## Conversion to tfrecords

The following step will be quite memory inefficient. Copy all files from images/train and images/test into the images folder. We will need it.

Add the following file to the directory

```
adhoc/myproject
+ ...
+ xml_to_csv.py
```

Note that this file, as shown here, needs to be configured. The variable image_path has to point to the train and test folders in images folder in adhoc/myproject. See the instruction in the link. Now go to the command line cmd.exe.

```
cd "C:\Users\acer\Desktop\adhoc\myproject"
python xml_to_csv.py
```

Both test_labels.csv and train_labels.csv will be produced in adhoc/myproject/data if the process is successful. Check that the csv files contain something like this

```
filename, width, height, class, xmin, ymin, xmax, ymax
imgBEE_107.png, 524, 350, butterfly, 151, 9, 424, 224
...
```
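As a quick sanity check on the generated csv, we can count the bounding boxes per class. The snippet below works on a mock csv string, and `count_classes` is our own helper (not part of the tutorial scripts):

```python
import csv
import io
from collections import Counter

sample = """filename,width,height,class,xmin,ymin,xmax,ymax
imgBEE_107.png,524,350,butterfly,151,9,424,224
imgBEE_012.png,500,333,bee,10,20,100,120
"""

def count_classes(csv_text):
    """Count bounding boxes per class in a labels csv."""
    return Counter(row["class"] for row in csv.DictReader(io.StringIO(csv_text)))

print(dict(count_classes(sample)))  # {'butterfly': 1, 'bee': 1}
```

A badly imbalanced count here (or a class name that never appears) is worth fixing before generating tfrecords.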

Also, add the following file to the directory

```
adhoc\myproject\Lib\site-packages\tensorflow\models\research\object_detection
+ ...
+ generate_tf_records.py
```

This file is also shown here, and needs to be configured similarly. In main() of the script, adjust the variable path and output_path (see green highlight in the link) to the following.

```
path = os.path.join("C:\\Users\\acer\\Desktop\\adhoc\\myproject\\", 'images')
```

Also, edit the following function to correspond to the label map in the case you want to add more types of insects (see the orange highlight in the link).

```
def class_text_to_int(row_label):
    if row_label == 'butterfly':
        return 1
    elif row_label == 'bee':
        return 2
    else:
        return None
```

Now go to the command line cmd.exe, move into directory tensorflow\models\research\object_detection using

`cd "C:\Users\acer\Desktop\adhoc\myproject\Lib\site-packages\tensorflow\models\research\object_detection"`

Create the tfrecord files using the following. Of course the path arguments output_path and csv_input must be changed accordingly.

```
python generate_tf_records.py --csv_input="C:/Users/acer/Desktop/adhoc/myproject/data/train_labels.csv" --output_path="C:/Users/acer/Desktop/adhoc/myproject/data/train.record"
python generate_tf_records.py --csv_input="C:/Users/acer/Desktop/adhoc/myproject/data/test_labels.csv" --output_path="C:/Users/acer/Desktop/adhoc/myproject/data/test.record"
```

Both test.record and train.record will be produced in adhoc/myproject/data.

See the next part, part IV, for training and prediction.

## Object Detection using Tensorflow: bee and butterfly Part I


First preparation

Our objective here is to try using tensorflow object detection API on Windows machine. We will train our model to recognise butterflies and bees. See the following detection on some images that we obtain at the end of a 4-hour training.

We assume no prerequisite knowledge and will go through step by step as much as possible. We use python 3.6. Find a compatible version from the official site here . Follow the instruction and download accordingly.

We will use the command line cmd.exe for no special reason; any other command line is okay. Try typing python in the command line and see if we get into python mode. If the command is not recognized, set the environment variables properly. In Windows, add a variable named PYTHONPATH whose value is the location where python 3.6 is installed; typically this is just C:\Python36. Then, to the variable Path, add both %PYTHONPATH% and %PYTHONPATH%\Scripts.

Let us do this in a virtual environment. We will create a folder that serves as the virtual environment within the Desktop. Create a folder adhoc in the Desktop. Copy the directory path to adhoc (in windows 10 this can simply be found on the top-left of windows explorer) using copy path. In cmd.exe, move into this directory by typing

`cd "C:\Users\acer\Desktop\adhoc"`

and the above refers to my directory path to this folder adhoc. Install a python package for virtual environments. The second line creates a virtual environment, the third line activates it, and the fourth line moves us into myproject.

```
pip install virtualenv
virtualenv myproject
myproject\Scripts\activate
cd myproject
```

Install the following dependencies in order to use tensorflow. We use the gpu version of tensorflow. If you just want to test with the cpu, or if your machine does not have a gpu, replace the last line with pip install tensorflow.

```
pip install Cython
pip install contextlib2
pip install jupyter
pip install matplotlib
pip install pillow
pip install lxml
pip install pandas
pip install tensorflow-gpu==1.10.0
```

Remark: You can type deactivate to exit the virtual environment.

Another remark: We successfully used tensorflow-gpu version 1.10.0 for this series of tutorials. However, when trying version 1.11.0 (the latest at the time of writing), our machine did not recognize the gpu. Test whether the GPU is used using this code.

Install CUDA which is required by tensorflow as well.  We used CUDA 9.0 from here. Furthermore, we also need cudnn. Download it from here.  Unzip cudnn and copy everything into where CUDA is installed (or follow the instructions in the website). In my case, it is in “C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0”.

When installed, go into python and do

`import tensorflow as tf`

If there is no problem, skip to the next section “More preparations”. We tested pip install tensorflow on Windows 10 on VirtualBox. When importing tensorflow, we got an error asking us to install the Microsoft Visual Studio 2015 redistributable. We downloaded it from the given link, but the error persisted. We then installed Microsoft Visual Studio Community 2017 via the Microsoft VS Installer, with the following packages: 1. .NET desktop development and 2. Desktop development with C++, under the Windows category. Once completed, tensorflow worked.

More preparations

Create the following directories.

```
C:\Users\acer\Desktop\adhoc\myproject
+ ...
+ data
  - butterfly_bee_label_map.pbtxt
+ models
  + model
  - faster_rcnn_resnet101_coco.config  # See next section
```

The following is the label map.

butterfly_bee_label_map.pbtxt

```
item {
  id: 1
  name: 'butterfly'
}

item {
  id: 2
  name: 'bee'
}
```
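If you later add more insect classes, the label map can be generated programmatically so that the ids stay consistent with class_text_to_int. `label_map_pbtxt` below is our own illustrative helper, not part of the object detection API:

```python
def label_map_pbtxt(labels):
    """Render an ordered list of class names as a pbtxt label map (ids start at 1)."""
    blocks = []
    for i, name in enumerate(labels, start=1):
        blocks.append("item {\n  id: %d\n  name: '%s'\n}" % (i, name))
    return "\n\n".join(blocks)

print(label_map_pbtxt(["butterfly", "bee"]))
```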

Getting tensorflow object detection API

Download or clone the folder models from tensorflow object detection API here. Copy paste it into the tensorflow folder, so that, in my case, we will have the directory

```
adhoc\myproject\Lib\site-packages\tensorflow\models
+ official
+ research
+ ...
```

To prevent possible error during training later, we might need the following step (see the reference here): go to ~/tensorflow/models/research/object_detection, find model_lib.py. At around line 390, edit category_index.values() to list( category_index.values() ).

The file faster_rcnn_resnet101_coco.config is obtained from inside the models directory

`"C:\Users\acer\Desktop\adhoc\myproject\Lib\site-packages\tensorflow\models\research\object_detection\samples\configs\faster_rcnn_resnet101_coco.config"`

We need to configure all PATH_TO_BE_CONFIGURED in the config file; we will do this later. See part 4.

Also, we need to add the following to our environment variables. If you have PYTHONPATH variable and set path to include PYTHONPATH as instructed earlier, then add to PYTHONPATH the paths to research and research\slim.  In my computer, they are

```
C:\Python36\Lib\site-packages\tensorflow\models\research
C:\Python36\Lib\site-packages\tensorflow\models\research\slim
```

Now we want to convert the .proto files into python files in the directory tensorflow/models/research/object_detection/protos.

Download the protocol buffer from here. The page may have been updated, so find it by clicking on the Next page in the given link till you find the older release that goes by the name protoc-3.4.0-win32. Try here. Note that we are downloading version 3.4, which is an older release, since newer versions do give problems. Move protoc.exe into some directory you like. In this tutorial I move it to

`C:\protoc-3.4\bin\protoc.exe`

Go into the command line cmd.exe, then move into the tensorflow/models/research folder using (do adjust the paths accordingly)

`cd C:\Users\acer\Desktop\adhoc\myproject\Lib\site-packages\tensorflow\models\research`

and then run

`"C:\protoc-3.4\bin\protoc.exe" object_detection/protos/*.proto --python_out=.`

Notice that now, in tensorflow/models/research/object_detection/protos, python files (.py) have been generated from the .proto files.

Still in the \tensorflow\models\research directory, to check that things are doing fine, do

`python object_detection/builders/model_builder_test.py`

You will see OK if everything is fine. We will proceed to process our butterflies and bees data in the next part, part II.

Tips: If you encounter error such as No module named ‘absl’, probably you have already exited the virtual environment, i.e. the machine no longer sees the Python installed in the virtual environment. Just make sure to go back into the virtual environment every time you are working on this project, so that all the packages are well managed. From cmd.exe you can cd to myproject, and run Scripts\activate. To deactivate, just type deactivate and click Enter.

## Deep Neural Network Regression part 2.2

DNN Regressor

The following code can be found here in the folder DNN regression, python, under the names synregMVar_cont.ipynb (Jupyter notebook) and synregMVar_cont.py.

Continuing from part 1 in this link, we load the model saved under output1 and output2 for prediction. It will perform badly since it is trained with a small number of data points. We will then train it further with more data points and perform better prediction.

```
import kero.DataHandler.RandomDataFrame as RDF
import kero.DataHandler.DataTransform as dt
from kero.DataHandler.Generic import *

import numpy as np
import pandas as pd
import tensorflow as tf
import itertools
import matplotlib.pyplot as plt
import matplotlib
from sklearn.model_selection import train_test_split
from scipy.stats.stats import pearsonr
from pylab import rcParams
```

In this code, make sure the number of layers and hidden units are the same as the values used in the first round of training. Likewise, make sure the activation functions are the same as well. The number of steps for training in the later part of the code can be set a lot higher.

```
hiddenunit_set = [[32,16,8],[32,16,8]]
step_set = [2400, 2400]  # [6400,6400] # [None, None]
activationfn_set = [tf.nn.relu, tf.nn.relu]

no_of_new_training_set = 2000
new_test_size_frac = 0.5

rdf = RDF.RandomDataFrame()
####################################################
# Specify the input variables here
####################################################
FEATURES = ["first","second","third", "fourth","bool1", "bool2", "bool3", "bool4"]
output_label = "output1"  # !! List all the output column names
output_label2 = "output2"

col1 = {"column_name": FEATURES[0], "items": list(range(4))}
col2 = {"column_name": FEATURES[1], "items": list(np.linspace(10, 20, 8))}
col3 = {"column_name": FEATURES[2], "items": list(np.linspace(-100, 100, 1250))}
col4 = {"column_name": FEATURES[3], "items": list(np.linspace(-1, 1, 224))}
col5 = {"column_name": FEATURES[4], "items": [0, 1]}
col6 = {"column_name": FEATURES[5], "items": [0, 1]}
col7 = {"column_name": FEATURES[6], "items": [0, 1]}
col8 = {"column_name": FEATURES[7], "items": [0, 1]}

LABEL = [output_label, output_label2]
```

In the following code we load the training data set from part 2.1, drop all the defective data points, split it into training part (20 data points) and test part (980 data points), similar to, but not necessarily the same as, part 2.1.

```
df_train = pd.read_csv(r"regressionMVartest_train.csv")
print('df train shape =', df_train.shape)
cleanD_train, crippD_train, _ = dt.data_sieve(df_train)  # cleanD, crippD, origD
cleanD_train.get_list_from_df()
colname_set_train = df_train.columns
df_train_clean = cleanD_train.clean_df
df_train_crippled = crippD_train.crippled_df
print('df train clean shape =', df_train_clean.shape)
if df_train_crippled is not None:
    print('df train crippled shape =', df_train_crippled.shape)
else:
    print('df train: no defect')

# prepare
train = df_train_clean[:]
print(FEATURES, " -size = ", len(FEATURES))
```
```
# Columns for tensorflow
feature_cols = [tf.contrib.layers.real_valued_column(k) for k in FEATURES]

# Training set and Prediction set with the features to predict
training_set = train[FEATURES]
prediction_set = train[LABEL]

# Train and Test
x_train, x_test, y_train, y_test = train_test_split(training_set[FEATURES], prediction_set, test_size=0.98, random_state=42)
y_train = pd.DataFrame(y_train, columns=LABEL)
training_set = pd.DataFrame(x_train, columns=FEATURES).merge(y_train, left_index=True, right_index=True)

# Training for submission
training_sub = training_set[FEATURES]
# Same thing but for the test set
y_test = pd.DataFrame(y_test, columns=LABEL)
testing_set = pd.DataFrame(x_test, columns=FEATURES).merge(y_test, left_index=True, right_index=True)
print("training size = ", training_set.shape)
print("test size = ", testing_set.shape)
```

Then we do pre-processing. Once done, we are ready to feed these pre-processed data into the model for prediction.

```
range_second = [10,20]
range_third = [-100,100]
range_fourth = [-1,1]
# range_input_set = [range_second, range_third, range_fourth]
range_output1 = [-200,200]
range_output2 = [-600,600]
range_output_set = {'output1': range_output1, 'output2': range_output2}

conj_command_set = {FEATURES[0]: "",
                    FEATURES[1]: "cont_to_scale",
                    FEATURES[2]: "cont_to_scale",
                    FEATURES[3]: "cont_to_scale",
                    FEATURES[4]: "",
                    FEATURES[5]: "",
                    FEATURES[6]: "",
                    FEATURES[7]: "",
                    # OUTPUT
                    LABEL[0]: "cont_to_scale",
                    LABEL[1]: "cont_to_scale",
                    }
scale_output1 = [0,1]
scale_output2 = [0,1]
scale_output_set = {'output1': scale_output1, 'output2': scale_output2}
cont_to_scale_settings_second = {"scale": [-1, 1], "mode": "uniform", "original_scale": range_second}
cont_to_scale_settings_third = {"scale": [0, 1], "mode": "uniform", "original_scale": range_third}
cont_to_scale_settings_fourth = {"scale": [0, 1], "mode": "uniform", "original_scale": range_fourth}
cont_to_scale_settings_output1 = {"scale": scale_output1, "mode": "uniform", "original_scale": range_output1}
cont_to_scale_settings_output2 = {"scale": scale_output2, "mode": "uniform", "original_scale": range_output2}
conj_command_setting_set = {FEATURES[0]: None,
                            FEATURES[1]: cont_to_scale_settings_second,
                            FEATURES[2]: cont_to_scale_settings_third,
                            FEATURES[3]: cont_to_scale_settings_fourth,
                            FEATURES[4]: None,
                            FEATURES[5]: None,
                            FEATURES[6]: None,
                            FEATURES[7]: None,
                            # OUTPUT
                            LABEL[0]: cont_to_scale_settings_output1,
                            LABEL[1]: cont_to_scale_settings_output2,
                            }
```
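The "cont_to_scale" command with mode "uniform" presumably performs a linear (min-max) map from original_scale onto scale, which is also how the predictions are later mapped back. A sketch of that mapping (`to_scale` is our own function, not kero's API):

```python
def to_scale(x, original_scale, scale):
    """Linearly map x from the interval original_scale onto the interval scale."""
    (a, b), (c, d) = original_scale, scale
    return c + (x - a) * (d - c) / (b - a)

# output1 lives in [-200, 200] but is trained on [0, 1]:
print(to_scale(0.0, [-200, 200], [0, 1]))   # 0.5
# swapping the two intervals gives the inverse transform:
print(to_scale(0.75, [0, 1], [-200, 200]))  # 100.0
```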
```
# Model
tf.logging.set_verbosity(tf.logging.ERROR)
regressor_set = []
for i in range(len(LABEL)):
    regressor = tf.contrib.learn.DNNRegressor(feature_columns=feature_cols,
                                              activation_fn=activationfn_set[i],
                                              hidden_units=hiddenunit_set[i],
                                              model_dir=LABEL[i])
    regressor_set.append(regressor)

# Reset the index of training
training_set.reset_index(drop=True, inplace=True)

def input_fn(data_set, one_label, pred=False):
    # one_label is an element of LABEL
    if pred == False:
        feature_cols = {k: tf.constant(data_set[k].values) for k in FEATURES}
        labels = tf.constant(data_set[one_label].values)
        return feature_cols, labels
    if pred == True:
        feature_cols = {k: tf.constant(data_set[k].values) for k in FEATURES}
        return feature_cols
```
```
# Conjugate

cleanD_testing_set = dt.clean_data()
cleanD_testing_set.clean_df = testing_set
cleanD_testing_set.build_conj_dataframe(conj_command_set, conj_command_setting_set=conj_command_setting_set)

test_conj = cleanD_testing_set.clean_df_conj[:]
```

Then we perform the prediction on the 980 test data points we just set aside.

```
# Evaluation on the test set created by train_test_split
print("Final Loss on the testing set: ")
predictions_prev_set = []
for i in range(len(LABEL)):
    ev = regressor_set[i].evaluate(input_fn=lambda: input_fn(test_conj, LABEL[i]), steps=1)
    loss_score1 = ev["loss"]
    print(LABEL[i], "{0:f}".format(loss_score1))
    # Predictions
    y = regressor_set[i].predict(input_fn=lambda: input_fn(test_conj, LABEL[i]))
    predictions_prev = list(itertools.islice(y, test_conj.shape[0]))
    predictions_prev_set.append(predictions_prev)
print("predictions_prev_set length = ", len(predictions_prev_set))
```
```
corrcoeff_set = []
predictions_set = []
reality_set = []
print("pearson correlation coefficients  =  ")

for i in range(len(LABEL)):
    # print(LABEL[i]," : ",init_scale_max ,init_scale_min)
    # need to inverse transform,
    # since the prediction is in conj form
    initial_scale = [range_output_set[LABEL[i]][0], range_output_set[LABEL[i]][1]]
    orig_scale = [scale_output_set[LABEL[i]][0], scale_output_set[LABEL[i]][1]]
    pred_inv = dt.conj_from_cont_to_scaled(predictions_prev_set[i], scale=initial_scale, mode="uniform", original_scale=orig_scale)
    #############################
    predictions = pd.DataFrame(pred_inv, columns=['Prediction'])
    predictions_set = predictions_set + [pred_inv]  # a list, or column

    reality = testing_set[LABEL[i]].values  # a list, or column
    reality_set = reality_set + [reality]
    corrcoeff = pearsonr(list(predictions.Prediction), list(reality))
    corrcoeff_set.append(corrcoeff)
    print(LABEL[i], " : ", corrcoeff)
```
```
matplotlib.rc('xtick', labelsize=20)
matplotlib.rc('ytick', labelsize=20)
for i in range(len(LABEL)):
    fig, ax = plt.subplots()
    # plt.style.use('ggplot')
    plt.scatter(predictions_set[i], reality_set[i], s=3, c='r', lw=0)  # ,'ro'
    plt.xlabel('Predictions', fontsize=20)
    plt.ylabel('Reality', fontsize=20)
    plt.title('Predictions x Reality on dataset Test: ' + LABEL[i], fontsize=20)

    plt.plot([reality_set[i].min(), reality_set[i].max()], [reality_set[i].min(), reality_set[i].max()], 'k--', lw=2)
```

As shown above, the prediction performance is poor.

## Further Training

Now, we create more data for further training.

```
no_of_data_points = [no_of_new_training_set, None]  # number of rows for training and testing data sets to be generated.
puncture_rate = 0.001

rdf = RDF.RandomDataFrame()
####################################################
# Specify the input variables here
####################################################
FEATURES = ["first","second","third", "fourth","bool1", "bool2", "bool3", "bool4"]
# output_label=  # !! List all the output column names
# output_label2='output2'
LABEL = ['output1','output2']

col1 = {"column_name": FEATURES[0], "items": list(range(4))}
col2 = {"column_name": FEATURES[1], "items": list(np.linspace(10, 20, 8))}
col3 = {"column_name": FEATURES[2], "items": list(np.linspace(-100, 100, 1250))}
col4 = {"column_name": FEATURES[3], "items": list(np.linspace(-1, 1, 224))}
col5 = {"column_name": FEATURES[4], "items": [0, 1]}
col6 = {"column_name": FEATURES[5], "items": [0, 1]}
col7 = {"column_name": FEATURES[6], "items": [0, 1]}
col8 = {"column_name": FEATURES[7], "items": [0, 1]}

rdf.initiate_random_table(no_of_data_points[0], col1, col2, col3, col4, col5, col6, col7, col8, panda=True)
# print("clean\n", rdf.clean_df)

df_temp = rdf.clean_df
listform, column_name_list = dt.dataframe_to_list(df_temp)

########################################################
# Specify the system of equations which determines
# the output variables.
########################################################
tempcol = []
tempcol2 = []
gg = listform[:]
column_name_list = list(column_name_list)

########## Specify the name(s) of the output variable(s) ##########
column_name_list = column_name_list + LABEL

listform = list(listform)
for i in range(len(listform[0])):
    # example 0 (very easy)
    # temp = gg[0][i] + gg[1][i] + gg[2][i] + gg[3][i] + gg[4][i] + gg[5][i] + gg[6][i] + gg[7][i]
    # temp2 = gg[0][i] - gg[1][i] + gg[2][i] - gg[3][i] + gg[4][i] - gg[5][i] + gg[6][i] - gg[7][i]

    # example 1
    temp = gg[0][i]**2 + gg[1][i] + gg[2][i] + (gg[4][i] + gg[5][i])*gg[3][i] + gg[6][i] + gg[7][i]
    temp2 = gg[0][i] - gg[1][i]**2 + gg[2][i] - gg[3][i]*(0.5*(gg[6][i] - gg[7][i])) + gg[4][i] - gg[5][i]
    ########################################
    tempcol = tempcol + [temp]
    tempcol2 = tempcol2 + [temp2]
listform = listform + [tempcol, tempcol2]
# for i in range(len(listform)):
#     print(column_name_list[i], '-', listform[i])
########################################################

listform = transpose_list(listform)
# print(listform)
# print(column_name_list)
temp_df = pd.DataFrame(listform, columns=column_name_list)
rdf.clean_df = temp_df
# print(rdf.clean_df)

rdf.crepify_table(rdf.clean_df, rate=puncture_rate)
# print("post crepify\n", rdf.crepified_df)
rdf.crepified_df.to_csv("regressionMVartest_train_more.csv", index=False)
```

We load the new training set, split them into training and test part (this time 50% each), and perform the pre-processing on the training part of the new training set.

```
df_train = pd.read_csv(r"regressionMVartest_train_more.csv")
print('df train shape =', df_train.shape)
cleanD_train, crippD_train, _ = dt.data_sieve(df_train)  # cleanD, crippD, origD
cleanD_train.get_list_from_df()
colname_set_train = df_train.columns
df_train_clean = cleanD_train.clean_df
df_train_crippled = crippD_train.crippled_df
print('df train clean shape =', df_train_clean.shape)
if df_train_crippled is not None:
    print('df train crippled shape =', df_train_crippled.shape)
else:
    print('df train: no defect')
```
```
# prepare
dftr = df_train_clean[:]
train = dftr
print(FEATURES, " -size = ", len(FEATURES))
```
```
feature_cols = [tf.contrib.layers.real_valued_column(k) for k in FEATURES]

# Training set and Prediction set with the features to predict
training_set = train[FEATURES]
prediction_set = train[LABEL]

# Train and Test
x_train, x_test, y_train, y_test = train_test_split(training_set[FEATURES], prediction_set, test_size=new_test_size_frac, random_state=42)
y_train = pd.DataFrame(y_train, columns=LABEL)
training_set = pd.DataFrame(x_train, columns=FEATURES).merge(y_train, left_index=True, right_index=True)

# Training for submission
training_sub = training_set[FEATURES]
# Same thing but for the test set
y_test = pd.DataFrame(y_test, columns=LABEL)
testing_set = pd.DataFrame(x_test, columns=FEATURES).merge(y_test, left_index=True, right_index=True)
print("training size = ", training_set.shape)
print("test size = ", testing_set.shape)
```
```
range_second = [10,20]
range_third = [-100,100]
range_fourth = [-1,1]
# range_input_set = [range_second, range_third, range_fourth]
range_output1 = [-200,200]
range_output2 = [-600,600]
range_output_set = {'output1': range_output1, 'output2': range_output2}

conj_command_set = {FEATURES[0]: "",
                    FEATURES[1]: "cont_to_scale",
                    FEATURES[2]: "cont_to_scale",
                    FEATURES[3]: "cont_to_scale",
                    FEATURES[4]: "",
                    FEATURES[5]: "",
                    FEATURES[6]: "",
                    FEATURES[7]: "",
                    # OUTPUT
                    LABEL[0]: "cont_to_scale",
                    LABEL[1]: "cont_to_scale",
                    }
scale_output1 = [0,1]
scale_output2 = [0,1]
scale_output_set = {'output1': scale_output1, 'output2': scale_output2}
cont_to_scale_settings_second = {"scale": [-1, 1], "mode": "uniform", "original_scale": range_second}
cont_to_scale_settings_third = {"scale": [0, 1], "mode": "uniform", "original_scale": range_third}
cont_to_scale_settings_fourth = {"scale": [0, 1], "mode": "uniform", "original_scale": range_fourth}
cont_to_scale_settings_output1 = {"scale": scale_output1, "mode": "uniform", "original_scale": range_output1}
cont_to_scale_settings_output2 = {"scale": scale_output2, "mode": "uniform", "original_scale": range_output2}
conj_command_setting_set = {FEATURES[0]: None,
                            FEATURES[1]: cont_to_scale_settings_second,
                            FEATURES[2]: cont_to_scale_settings_third,
                            FEATURES[3]: cont_to_scale_settings_fourth,
                            FEATURES[4]: None,
                            FEATURES[5]: None,
                            FEATURES[6]: None,
                            FEATURES[7]: None,
                            # OUTPUT
                            LABEL[0]: cont_to_scale_settings_output1,
                            LABEL[1]: cont_to_scale_settings_output2,
                            }
cleanD_training_set = dt.clean_data()
cleanD_training_set.clean_df = training_set
cleanD_training_set.build_conj_dataframe(conj_command_set, conj_command_setting_set=conj_command_setting_set)

train_conj = cleanD_training_set.clean_df_conj[:]
```

We define the model here, perform training, pre-process the test part of the training set, and predict the outcome of that test part.

```
# Model
tf.logging.set_verbosity(tf.logging.ERROR)
regressor_set = []
for i in range(len(LABEL)):
    regressor = tf.contrib.learn.DNNRegressor(feature_columns=feature_cols,
                                              activation_fn=activationfn_set[i],
                                              hidden_units=hiddenunit_set[i],
                                              model_dir=LABEL[i])
    regressor_set.append(regressor)

# Reset the index of training
training_set.reset_index(drop=True, inplace=True)

def input_fn(data_set, one_label, pred=False):
    # one_label is an element of LABEL
    if pred == False:
        feature_cols = {k: tf.constant(data_set[k].values) for k in FEATURES}
        labels = tf.constant(data_set[one_label].values)
        return feature_cols, labels
    if pred == True:
        feature_cols = {k: tf.constant(data_set[k].values) for k in FEATURES}
        return feature_cols

# TRAINING HERE
for i in range(len(LABEL)):
    regressor_set[i].fit(input_fn=lambda: input_fn(train_conj, LABEL[i]), steps=step_set[i])
```
```
# Conjugate the testing part of the training set

cleanD_testing_set = dt.clean_data()
cleanD_testing_set.clean_df = testing_set
cleanD_testing_set.build_conj_dataframe(conj_command_set, conj_command_setting_set=conj_command_setting_set)

test_conj = cleanD_testing_set.clean_df_conj[:]
```
```
# Evaluation on the test set created by train_test_split
print("Final Loss on the testing set: ")
predictions_prev_set_new = []
for i in range(len(LABEL)):
    ev = regressor_set[i].evaluate(input_fn=lambda: input_fn(test_conj, LABEL[i]), steps=1)
    loss_score1 = ev["loss"]
    print(LABEL[i], "{0:f}".format(loss_score1))
    # Predictions
    y = regressor_set[i].predict(input_fn=lambda: input_fn(test_conj, LABEL[i]))
    predictions_prev = list(itertools.islice(y, test_conj.shape[0]))
    predictions_prev_set_new.append(predictions_prev)
print("predictions_prev_set_new length = ", len(predictions_prev_set_new))
```
```
corrcoeff_set_new = []
predictions_set_new = []
reality_set_new = []
print("Pearson correlation coefficients = ")

for i in range(len(LABEL)):
    # The prediction is in conjugate (scaled) form, so inverse-transform it
    initial_scale = [range_output_set[LABEL[i]][0], range_output_set[LABEL[i]][1]]
    orig_scale = [scale_output_set[LABEL[i]][0], scale_output_set[LABEL[i]][1]]
    pred_inv = dt.conj_from_cont_to_scaled(predictions_prev_set_new[i], scale=initial_scale,
                                           mode="uniform", original_scale=orig_scale)

    predictions = pd.DataFrame(pred_inv, columns=['Prediction'])
    predictions_set_new = predictions_set_new + [pred_inv]  # a list, or column

    reality = testing_set[LABEL[i]].values  # a list, or column
    reality_set_new = reality_set_new + [reality]
    corrcoeff = pearsonr(list(predictions.Prediction), list(reality))
    corrcoeff_set_new.append(corrcoeff)
    print(LABEL[i], " : ", corrcoeff)
```
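The helper dt.conj_from_cont_to_scaled maps predictions from the scaled (conjugate) range back onto the original range of each label. Assuming its "uniform" mode is a linear min–max rescale, the transform is equivalent to the following sketch (the function name and the ranges used are illustrative, not taken from the actual library):

```python
def rescale_uniform(values, scale, original_scale):
    """Linearly map values from interval `scale` onto `original_scale`
    (a sketch of what a uniform min-max inverse transform does)."""
    lo, hi = scale
    olo, ohi = original_scale
    return [olo + (v - lo) * (ohi - olo) / (hi - lo) for v in values]

# Map predictions from the scaled range [0, 1] back to an original range [10, 30].
print(rescale_uniform([0.0, 0.5, 1.0], scale=(0.0, 1.0), original_scale=(10.0, 30.0)))
# → [10.0, 20.0, 30.0]
```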
```
matplotlib.rc('xtick', labelsize=20)
matplotlib.rc('ytick', labelsize=20)

for i in range(len(LABEL)):
    fig2, ax2 = plt.subplots()
    plt.scatter(predictions_set_new[i], reality_set_new[i], s=3, c='r', lw=0)
    plt.xlabel('Predictions', fontsize=20)
    plt.ylabel('Reality', fontsize=20)
    plt.title('Predictions x Reality on dataset Test: ' + LABEL[i], fontsize=20)
    # Dashed diagonal: points on this line are perfect predictions
    ax2.plot([reality_set_new[i].min(), reality_set_new[i].max()],
             [reality_set_new[i].min(), reality_set_new[i].max()], 'k--', lw=2)
```

As shown above, the predictions of the newly trained model line up better with reality, i.e. they fall closer to the true values.
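The Pearson coefficients printed above quantify this agreement: scipy.stats.pearsonr returns the correlation together with a two-sided p-value, and a coefficient near 1 means the predictions track reality closely. A quick self-contained check on toy values:

```python
from scipy.stats import pearsonr

# Toy predictions that track the true values closely.
predictions = [1.0, 2.0, 3.0, 4.0]
reality = [1.1, 1.9, 3.2, 3.9]
r, p_value = pearsonr(predictions, reality)
print(round(r, 3))  # close to 1, since the two lists nearly coincide
```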

Finally, we use the newly trained model to predict the outcomes for our test data. First, we pre-process it.

```
df_test = pd.read_csv(r"regressionMVartest_test.csv")
# print('df test shape =', df_test.shape)
cleanD_test, crippD_test, _ = dt.data_sieve(df_test)
cleanD_test.get_list_from_df()
colname_set_train = df_train.columns
df_test_clean = cleanD_test.clean_df
df_test_crippled = crippD_test.crippled_df

conj_command_set_test = {FEATURES[0]: "",
                         FEATURES[1]: "cont_to_scale",
                         FEATURES[2]: "cont_to_scale",
                         FEATURES[3]: "cont_to_scale",
                         FEATURES[4]: "",
                         FEATURES[5]: "",
                         FEATURES[6]: "",
                         FEATURES[7]: "",
                         }
conj_command_setting_set_test = {FEATURES[0]: None,
                                 FEATURES[1]: cont_to_scale_settings_second,
                                 FEATURES[2]: cont_to_scale_settings_third,
                                 FEATURES[3]: cont_to_scale_settings_fourth,
                                 FEATURES[4]: None,
                                 FEATURES[5]: None,
                                 FEATURES[6]: None,
                                 FEATURES[7]: None,
                                 }

# Same pre-processing, but for the test set
cleanD_test.build_conj_dataframe(conj_command_set_test, conj_command_setting_set=conj_command_setting_set_test)

test_predict_conj = cleanD_test.clean_df_conj[:]
print(df_test.shape)
print(test_predict_conj.shape)
```

Next, we write the predictions to a separate file, synregMVar_submission.csv. The result is compared with the true solutions recorded in regressionMVartest_test_correctans.csv, generated in part 1.

```
filename = "synregMVar_submission.csv"

y_predict_inv_set = []
for i in range(len(LABEL)):
    y_predict = regressor_set[i].predict(input_fn=lambda: input_fn(test_predict_conj, LABEL[i], pred=True))
    y_predict_before = list(itertools.islice(y_predict, df_test.shape[0]))
    # The prediction is in scaled form; transform it back to the original scale
    initial_scale = [range_output_set[LABEL[i]][0], range_output_set[LABEL[i]][1]]
    orig_scale = [scale_output_set[LABEL[i]][0], scale_output_set[LABEL[i]][1]]
    y_predict_inv = dt.conj_from_cont_to_scaled(y_predict_before, scale=initial_scale,
                                                mode="uniform", original_scale=orig_scale)
    y_predict_inv_set = y_predict_inv_set + [y_predict_inv]

    fig2, ax2 = plt.subplots()
    real_test = np.array(list(df_test_correct_ans[LABEL[i]]))
    plt.scatter(y_predict_inv, real_test, s=3, c='r', lw=0)
    plt.xlabel('Predictions', fontsize=20)
    plt.ylabel('Reality', fontsize=20)
    plt.title('Predictions x Reality on dataset Test: ' + LABEL[i], fontsize=20)
    ax2.plot([real_test.min(), real_test.max()], [real_test.min(), real_test.max()], 'k--', lw=4)

y_predict_inv_set = transpose_list(y_predict_inv_set)
# print(y_predict_inv_set)
y_predict_for_csv = pd.DataFrame(y_predict_inv_set, columns=LABEL)
y_predict_for_csv.to_csv(filename, index=False)
```
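The transpose_list call above is needed because the predictions are collected label by label (one list per column of LABEL), while the DataFrame constructor expects row-wise records. Assuming it is a plain list transpose, it is equivalent to this zip-based sketch:

```python
def transpose_list(rows):
    """Turn a list of columns into a list of rows (or vice versa)."""
    return [list(t) for t in zip(*rows)]

# Two per-label prediction columns become per-sample rows:
print(transpose_list([[1, 2, 3], [4, 5, 6]]))
# → [[1, 4], [2, 5], [3, 6]]
```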

The prediction is stored in synregMVar_submission.csv.