
Reference: https://github.com/tensorflow/models/tree/master/slim.

Image classification using TensorFlow-Slim

Preparation

Install TensorFlow

Reference: https://www.tensorflow.org/install/.

For example, to install TensorFlow with GPU support under Ubuntu for Python 2.7:

wget https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.2.0-cp27-none-linux_x86_64.whl
pip install tensorflow_gpu-1.2.0-cp27-none-linux_x86_64.whl

Download the TF-Slim image classification repository

cd $WORKSPACE
git clone https://github.com/tensorflow/models/

Prepare data

There are a number of public datasets; here we use the officially provided Flowers dataset as an example.

The official repository provides code to download and convert the data. To make the code easier to understand and to adapt it to your own data, the steps below follow the officially provided code.

cd $WORKSPACE/data
wget http://download.tensorflow.org/example_images/flower_photos.tgz
tar zxf flower_photos.tgz

The dataset folder is organized as follows:

flower_photos
├── daisy
│   ├── 100080576_f52e8ee070_n.jpg
│   └── ...
├── dandelion
├── LICENSE.txt
├── roses
├── sunflowers
└── tulips

Because our own datasets are not necessarily organized into one folder per category, we generate list.txt to record the mapping between image paths and labels, one line per image, e.g. daisy/100080576_f52e8ee070_n.jpg 0.

Python code:

import os

class_names_to_ids = {'daisy': 0, 'dandelion': 1, 'roses': 2, 'sunflowers': 3, 'tulips': 4}
data_dir = 'flower_photos/'
output_path = 'list.txt'

fd = open(output_path, 'w')
for class_name in class_names_to_ids.keys():
    images_list = os.listdir(data_dir + class_name)
    for image_name in images_list:
        fd.write('{}/{} {}\n'.format(class_name, image_name, class_names_to_ids[class_name]))
fd.close()

To make the label names easy to look up, you can also define labels.txt:

daisy
dandelion
roses
sunflowers
tulips
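labels.txt can also be generated from the same class_names_to_ids mapping used above, so the two stay consistent; a small sketch in plain Python:

```python
# Write labels.txt so that line i holds the name of class i,
# derived from the class_names_to_ids mapping defined earlier.
class_names_to_ids = {'daisy': 0, 'dandelion': 1, 'roses': 2, 'sunflowers': 3, 'tulips': 4}
ids_to_names = sorted(class_names_to_ids, key=class_names_to_ids.get)
with open('labels.txt', 'w') as fd:
    fd.write('\n'.join(ids_to_names) + '\n')
```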

Randomly generate training sets and validation sets:

Python code:

import random

_NUM_VALIDATION = 350
_RANDOM_SEED = 0
list_path = 'list.txt'
train_list_path = 'list_train.txt'
val_list_path = 'list_val.txt'

fd = open(list_path)
lines = fd.readlines()
fd.close()

random.seed(_RANDOM_SEED)
random.shuffle(lines)

fd = open(train_list_path, 'w')
for line in lines[_NUM_VALIDATION:]:
    fd.write(line)
fd.close()

fd = open(val_list_path, 'w')
for line in lines[:_NUM_VALIDATION]:
    fd.write(line)
fd.close()
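A quick sanity check of the split arithmetic above, using synthetic lines in place of the real list.txt (Flowers has 3670 images, so 350 go to validation and 3320 to training):

```python
# Synthetic stand-in for the shuffled list.txt: "path label" lines.
lines = ['img_{:04d}.jpg {}\n'.format(i, i % 5) for i in range(3670)]
_NUM_VALIDATION = 350
val_lines = lines[:_NUM_VALIDATION]    # first slice -> validation set
train_lines = lines[_NUM_VALIDATION:]  # remainder -> training set
assert len(val_lines) == 350
assert len(train_lines) == 3320        # matches --num_samples used later
assert set(val_lines).isdisjoint(train_lines)
```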

Generate TFRecord data:

Python code:

import sys
sys.path.insert(0, '../models/slim/')
from datasets import dataset_utils
import math
import os
import tensorflow as tf

def convert_dataset(list_path, data_dir, output_dir, _NUM_SHARDS=5):
    fd = open(list_path)
    lines = [line.split() for line in fd]
    fd.close()
    num_per_shard = int(math.ceil(len(lines) / float(_NUM_SHARDS)))
    with tf.Graph().as_default():
        decode_jpeg_data = tf.placeholder(dtype=tf.string)
        decode_jpeg = tf.image.decode_jpeg(decode_jpeg_data, channels=3)
        with tf.Session('') as sess:
            for shard_id in range(_NUM_SHARDS):
                output_path = os.path.join(output_dir,
                    'data_{:05}-of-{:05}.tfrecord'.format(shard_id, _NUM_SHARDS))
                tfrecord_writer = tf.python_io.TFRecordWriter(output_path)
                start_ndx = shard_id * num_per_shard
                end_ndx = min((shard_id + 1) * num_per_shard, len(lines))
                for i in range(start_ndx, end_ndx):
                    sys.stdout.write('\r>> Converting image {}/{} shard {}'.format(
                        i + 1, len(lines), shard_id))
                    sys.stdout.flush()
                    image_data = tf.gfile.FastGFile(os.path.join(data_dir, lines[i][0]), 'rb').read()
                    image = sess.run(decode_jpeg, feed_dict={decode_jpeg_data: image_data})
                    height, width = image.shape[0], image.shape[1]
                    example = dataset_utils.image_to_tfexample(
                        image_data, b'jpg', height, width, int(lines[i][1]))
                    tfrecord_writer.write(example.SerializeToString())
                tfrecord_writer.close()
    sys.stdout.write('\n')
    sys.stdout.flush()

os.system('mkdir -p train')
convert_dataset('list_train.txt', 'flower_photos', 'train/')
os.system('mkdir -p val')
convert_dataset('list_val.txt', 'flower_photos', 'val/')
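The shard bounds computed above split the list almost evenly; for the 3320 training images and the default 5 shards the arithmetic works out as follows (plain Python, no TensorFlow needed):

```python
import math

num_lines = 3320   # training images after the split above
_NUM_SHARDS = 5
num_per_shard = int(math.ceil(num_lines / float(_NUM_SHARDS)))  # 664 images per shard
bounds = [(s * num_per_shard, min((s + 1) * num_per_shard, num_lines))
          for s in range(_NUM_SHARDS)]
assert bounds[0] == (0, 664)
assert bounds[-1] == (2656, 3320)
assert sum(end - start for start, end in bounds) == num_lines
```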

The resulting folder structure is as follows:

data
├── flower_photos
├── labels.txt
├── list_train.txt
├── list.txt
├── list_val.txt
├── train
│   ├── data_00000-of-00005.tfrecord
│   ├── ...
│   └── data_00004-of-00005.tfrecord
└── val
    ├── data_00000-of-00005.tfrecord
    ├── ...
    └── data_00004-of-00005.tfrecord

(Optional) Download a pretrained model

The official repository provides a number of pretrained models; here we take Inception-ResNet-v2 as an example.

cd $WORKSPACE/checkpoints
wget http://download.tensorflow.org/models/inception_resnet_v2_2016_08_30.tar.gz
tar zxf inception_resnet_v2_2016_08_30.tar.gz

Training

Read data

The official repository provides code to read the Flowers dataset in models/slim/datasets/flowers.py; referring to it with some modifications gives a generic reader for the dataset defined above.

Write the following code to models/slim/datasets/dataset_classification.py:

import os
import tensorflow as tf

slim = tf.contrib.slim

def get_dataset(dataset_dir, num_samples, num_classes, labels_to_names_path=None, file_pattern='*.tfrecord'):
    file_pattern = os.path.join(dataset_dir, file_pattern)
    keys_to_features = {
        'image/encoded': tf.FixedLenFeature((), tf.string, default_value=''),
        'image/format': tf.FixedLenFeature((), tf.string, default_value='png'),
        'image/class/label': tf.FixedLenFeature(
            [], tf.int64, default_value=tf.zeros([], dtype=tf.int64)),
    }
    items_to_handlers = {
        'image': slim.tfexample_decoder.Image(),
        'label': slim.tfexample_decoder.Tensor('image/class/label'),
    }
    decoder = slim.tfexample_decoder.TFExampleDecoder(keys_to_features, items_to_handlers)
    items_to_descriptions = {
        'image': 'A color image of varying size.',
        'label': 'A single integer between 0 and ' + str(num_classes - 1),
    }
    labels_to_names = None
    if labels_to_names_path is not None:
        fd = open(labels_to_names_path)
        labels_to_names = {i: line.strip() for i, line in enumerate(fd)}
        fd.close()
    return slim.dataset.Dataset(
        data_sources=file_pattern,
        reader=tf.TFRecordReader,
        decoder=decoder,
        num_samples=num_samples,
        items_to_descriptions=items_to_descriptions,
        num_classes=num_classes,
        labels_to_names=labels_to_names)

Build model

The official repository provides many models in models/slim/nets/.

If you need a custom model, refer to the officially provided models and place yours in the same folder.
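For orientation: models/slim/nets/nets_factory.py resolves --model_name through a dict mapping model names to net functions, so adding a custom model amounts to writing a function that returns (logits, end_points) and registering it in that dict. A minimal sketch of the lookup pattern in plain Python (names hypothetical; real net functions build a TensorFlow graph):

```python
# Sketch of the nets_factory pattern: a registry from model names to
# net functions. A string stands in for the graph a real net would build.
networks_map = {}

def register(name):
    def deco(fn):
        networks_map[name] = fn
        return fn
    return deco

@register('my_net')
def my_net(images, num_classes, is_training=True):
    # A real slim net returns (logits, end_points) built from `images`.
    return 'logits({}, {})'.format(images, num_classes), {}

def get_network_fn(name, num_classes, is_training=False):
    if name not in networks_map:
        raise ValueError('Name of network unknown %s' % name)
    func = networks_map[name]
    def network_fn(images):
        return func(images, num_classes, is_training=is_training)
    network_fn.default_image_size = 224  # real nets attach their input size here
    return network_fn

fn = get_network_fn('my_net', num_classes=5)
logits, end_points = fn('image_batch')
```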

Start training

The official repository provides a training script. If you use the official data reading and preprocessing, you can start training as follows:

cd $WORKSPACE/models/slim
CUDA_VISIBLE_DEVICES="0" python train_image_classifier.py \
    --train_dir=train_logs \
    --dataset_name=flowers \
    --dataset_split_name=train \
    --dataset_dir=../../data/flowers \
    --model_name=inception_resnet_v2 \
    --checkpoint_path=../../checkpoints/inception_resnet_v2_2016_08_30.ckpt \
    --checkpoint_exclude_scopes=InceptionResnetV2/Logits,InceptionResnetV2/AuxLogits \
    --trainable_scopes=InceptionResnetV2/Logits,InceptionResnetV2/AuxLogits \
    --max_number_of_steps=1000 \
    --batch_size=32 \
    --learning_rate=0.01 \
    --learning_rate_decay_type=fixed \
    --save_interval_secs=60 \
    --save_summaries_secs=60 \
    --log_every_n_steps=10 \
    --optimizer=rmsprop \
    --weight_decay=0.00004

To train from scratch rather than fine-tune, remove --checkpoint_path, --checkpoint_exclude_scopes and --trainable_scopes.

To fine-tune all layers, remove --checkpoint_exclude_scopes and --trainable_scopes.

To use only the CPU, add --clone_on_cpu=True.

Other parameters can be left at their defaults or adjusted as needed.

To use your own data, modify models/slim/train_image_classifier.py as follows:

Change

from datasets import dataset_factory

to

from datasets import dataset_classification

Change

dataset = dataset_factory.get_dataset(
 FLAGS.dataset_name, FLAGS.dataset_split_name, FLAGS.dataset_dir)

to

dataset = dataset_classification.get_dataset(
 FLAGS.dataset_dir, FLAGS.num_samples, FLAGS.num_classes, FLAGS.labels_to_names_path)

After

tf.app.flags.DEFINE_string(
 'dataset_dir', None, 'The directory where the dataset files are stored.')

add:

tf.app.flags.DEFINE_integer(
 'num_samples', 3320, 'Number of samples.')
tf.app.flags.DEFINE_integer(
 'num_classes', 5, 'Number of classes.')
tf.app.flags.DEFINE_string(
 'labels_to_names_path', None, 'Label names file path.')

Then execute the following command to train:

cd $WORKSPACE/models/slim
python train_image_classifier.py \
    --train_dir=train_logs \
    --dataset_dir=../../data/train \
    --num_samples=3320 \
    --num_classes=5 \
    --labels_to_names_path=../../data/labels.txt \
    --model_name=inception_resnet_v2 \
    --checkpoint_path=../../checkpoints/inception_resnet_v2_2016_08_30.ckpt \
    --checkpoint_exclude_scopes=InceptionResnetV2/Logits,InceptionResnetV2/AuxLogits \
    --trainable_scopes=InceptionResnetV2/Logits,InceptionResnetV2/AuxLogits

Visualizing logs

You can monitor the loss trend with TensorBoard while training is running:

tensorboard --logdir train_logs/

Validation

The official repository provides a validation script:

python eval_image_classifier.py \
    --checkpoint_path=train_logs \
    --eval_dir=eval_logs \
    --dataset_name=flowers \
    --dataset_split_name=validation \
    --dataset_dir=../../data/flowers \
    --model_name=inception_resnet_v2

Similarly, if you are using your own dataset, you need to modify models/slim/eval_image_classifier.py:

Change

from datasets import dataset_factory

to

from datasets import dataset_classification

Change

dataset = dataset_factory.get_dataset(
 FLAGS.dataset_name, FLAGS.dataset_split_name, FLAGS.dataset_dir)

to

dataset = dataset_classification.get_dataset(
 FLAGS.dataset_dir, FLAGS.num_samples, FLAGS.num_classes, FLAGS.labels_to_names_path)

After

tf.app.flags.DEFINE_string(
 'dataset_dir', None, 'The directory where the dataset files are stored.')

add:

tf.app.flags.DEFINE_integer(
 'num_samples', 350, 'Number of samples.')
tf.app.flags.DEFINE_integer(
 'num_classes', 5, 'Number of classes.')
tf.app.flags.DEFINE_string(
 'labels_to_names_path', None, 'Label names file path.')

Execute the following command to validate:

python eval_image_classifier.py \
    --checkpoint_path=train_logs \
    --eval_dir=eval_logs \
    --dataset_dir=../../data/val \
    --num_samples=350 \
    --num_classes=5 \
    --model_name=inception_resnet_v2

You can run validation alongside training; note that you should use a different GPU or allocate GPU memory appropriately.

You can also visualize the validation logs. If you are already visualizing the training logs, it is recommended to use a different port, for example:

tensorboard --logdir eval_logs/ --port 6007

Test

Referring to models/slim/eval_image_classifier.py, you can write a script models/slim/test_image_classifier.py that runs the model on a single image:

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os
import math
import tensorflow as tf

from nets import nets_factory
from preprocessing import preprocessing_factory

slim = tf.contrib.slim

tf.app.flags.DEFINE_string(
    'master', '', 'The address of the TensorFlow master to use.')
tf.app.flags.DEFINE_string(
    'checkpoint_path', '/tmp/tfmodel/',
    'The directory where the model was written to or an absolute path to a '
    'checkpoint file.')
tf.app.flags.DEFINE_string(
    'test_path', '', 'Test image path.')
tf.app.flags.DEFINE_integer(
    'num_classes', 5, 'Number of classes.')
tf.app.flags.DEFINE_integer(
    'labels_offset', 0,
    'An offset for the labels in the dataset. This flag is primarily used to '
    'evaluate the VGG and ResNet architectures which do not use a background '
    'class for the ImageNet dataset.')
tf.app.flags.DEFINE_string(
    'model_name', 'inception_v3', 'The name of the architecture to evaluate.')
tf.app.flags.DEFINE_string(
    'preprocessing_name', None, 'The name of the preprocessing to use. If left '
    'as `None`, then the model_name flag is used.')
tf.app.flags.DEFINE_integer(
    'test_image_size', None, 'Eval image size')

FLAGS = tf.app.flags.FLAGS


def main(_):
    if not FLAGS.test_path:
        raise ValueError('You must supply the test image path with --test_path')
    tf.logging.set_verbosity(tf.logging.INFO)
    with tf.Graph().as_default():
        ####################
        # Select the model #
        ####################
        network_fn = nets_factory.get_network_fn(
            FLAGS.model_name,
            num_classes=(FLAGS.num_classes - FLAGS.labels_offset),
            is_training=False)

        #####################################
        # Select the preprocessing function #
        #####################################
        preprocessing_name = FLAGS.preprocessing_name or FLAGS.model_name
        image_preprocessing_fn = preprocessing_factory.get_preprocessing(
            preprocessing_name,
            is_training=False)

        test_image_size = FLAGS.test_image_size or network_fn.default_image_size

        if tf.gfile.IsDirectory(FLAGS.checkpoint_path):
            checkpoint_path = tf.train.latest_checkpoint(FLAGS.checkpoint_path)
        else:
            checkpoint_path = FLAGS.checkpoint_path

        with tf.Session() as sess:
            image = open(FLAGS.test_path, 'rb').read()
            image = tf.image.decode_jpeg(image, channels=3)
            processed_image = image_preprocessing_fn(image, test_image_size, test_image_size)
            processed_images = tf.expand_dims(processed_image, 0)
            logits, _ = network_fn(processed_images)
            predictions = tf.argmax(logits, 1)
            saver = tf.train.Saver()
            saver.restore(sess, checkpoint_path)
            np_image, network_input, predictions = sess.run([image, processed_image, predictions])
            print('{} {}'.format(FLAGS.test_path, predictions[0]))


if __name__ == '__main__':
    tf.app.run()

Execute the following command to test:

python test_image_classifier.py \
    --checkpoint_path=train_logs/ \
    --test_path=../../data/flower_photos/tulips/6948239566_0ac0a124ee_n.jpg \
    --num_classes=5 \
    --model_name=inception_resnet_v2


