SkFlow
Introduction
To make it simpler to experiment with machine learning in TensorFlow, Google offers a library that sits on top of TensorFlow and exposes a scikit-learn-style API. SkFlow makes life easier.
Import libraries
import tensorflow.contrib.learn as skflow
from sklearn import datasets, metrics
from sklearn import model_selection  # replaces the deprecated sklearn.cross_validation module
Load dataset
iris = datasets.load_iris()
x_train, x_test, y_train, y_test = model_selection.train_test_split(
    iris.data, iris.target, test_size=0.2, random_state=42)
# Feature columns are required in newer versions of tf.contrib.learn
feature_columns = skflow.infer_real_valued_columns_from_input(x_train)
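If you would rather declare the columns yourself instead of inferring them from the data, a minimal sketch could look like this (assuming the tf.contrib.layers API of the same era; the single unnamed 4-dimensional column covers the four iris measurements):

import tensorflow as tf

# Hypothetical manual alternative to infer_real_valued_columns_from_input:
# one unnamed real-valued column of dimension 4 (the iris measurements)
feature_columns = [tf.contrib.layers.real_valued_column("", dimension=4)]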
Linear classifier
classifier = skflow.LinearClassifier(feature_columns=feature_columns, n_classes=3,
                                     model_dir='/tmp/tf/linear/')
classifier.fit(x_train, y_train, steps=200, batch_size=32)
# predict() may return a generator in newer tf.contrib.learn versions, so materialize it first
score = metrics.accuracy_score(y_test, list(classifier.predict(x_test)))
print("Accuracy: %f" % score)
Accuracy: 0.966667
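Once fitted, the classifier can also score brand-new flowers. The two sample measurements below are made up for illustration:

import numpy as np

# Two hypothetical flowers: (sepal length, sepal width, petal length, petal width)
new_samples = np.array([[6.4, 3.2, 4.5, 1.5],
                        [5.8, 3.1, 5.0, 1.7]], dtype=np.float32)
predictions = list(classifier.predict(new_samples))  # list() for generator-returning versions
print(predictions)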
Multi-layer perceptron
classifier = skflow.DNNClassifier(feature_columns=feature_columns, hidden_units=[10, 20, 10],
                                  n_classes=3, model_dir='/tmp/tf/mlp/')
classifier.fit(x_train, y_train, steps=200)
score = metrics.accuracy_score(y_test, list(classifier.predict(x_test)))
print("Accuracy: %f" % score)
Accuracy: 1.000000
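The estimator also ships its own evaluate method, so you can skip sklearn.metrics entirely; a sketch assuming the contrib.learn evaluate(x=..., y=...) call that returns a dict of metrics:

# evaluate() runs the fitted model on the held-out data and returns a metrics dict
eval_metrics = classifier.evaluate(x=x_test, y=y_test)
print("Accuracy: %f" % eval_metrics["accuracy"])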
Using TensorBoard
It's much easier to monitor your model with TensorBoard through SkFlow. Just pass the "model_dir" parameter to the classifier constructor.
After running the code below, launch TensorBoard from your server console:
tensorboard --logdir=/tmp/tf_examples/test/
classifier = skflow.DNNClassifier(feature_columns=feature_columns, hidden_units=[10, 20, 10],
                                  n_classes=3, model_dir='/tmp/tf_examples/test/')
classifier.fit(x_train, y_train, steps=200)
score = metrics.accuracy_score(y_test, list(classifier.predict(x_test)))
print("Accuracy: %f" % score)
Accuracy: 1.000000
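To get validation curves in TensorBoard as well, a monitor can be attached to fit. This is a sketch assuming the contrib.learn ValidationMonitor API of that era, not something from the original example:

# Evaluate on the test set every 50 steps and write the results to model_dir,
# so TensorBoard plots validation accuracy/loss next to the training curves.
# Note: the monitor evaluates from saved checkpoints, so frequent checkpointing helps.
validation_monitor = skflow.monitors.ValidationMonitor(x_test, y_test, every_n_steps=50)
classifier.fit(x_train, y_train, steps=200, monitors=[validation_monitor])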