Notes and code from Data_310 Lectures
The model code is as follows:

```python
import tensorflow as tf
from tensorflow.keras.optimizers import RMSprop

model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(16, (3, 3), activation='relu', input_shape=(150, 150, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')  # single sigmoid unit for binary cat/dog output
])

model.compile(optimizer=RMSprop(learning_rate=0.001),  # 'lr' is deprecated in newer Keras releases
              loss='binary_crossentropy',
              metrics=['acc'])
```
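As a sanity check on the architecture above, each 3×3 convolution with the default `'valid'` padding trims two pixels per side, and each 2×2 max-pooling halves the spatial size. A quick pure-Python trace of the feature-map dimensions shows what the `Flatten` layer ends up receiving:

```python
# Trace the spatial size of the feature maps through the three conv/pool stages.
def conv_out(size, kernel=3):
    # 'valid' padding: the output loses (kernel - 1) pixels in each dimension
    return size - (kernel - 1)

def pool_out(size, pool=2):
    # 2x2 max-pooling halves the size (integer floor)
    return size // pool

size = 150                 # input images are 150x150x3
filters = [16, 32, 64]     # filter counts of the three Conv2D layers
for f in filters:
    size = pool_out(conv_out(size))

flat = size * size * filters[-1]   # units fed into Flatten
print(size, flat)  # 17 18496
```

So the dense layers operate on an 18,496-element vector flattened from a 17×17×64 volume.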
- The code predicted this Balinese cat to be a dog.
- The code predicted this orange Tabby to be a cat.
- The code predicted this Sphynx cat to be a cat.
- The code predicted my 11th grade German exchange student’s dog to be a dog.
- The code predicted my high school best friend’s dog to be a dog.
- The code predicted this Pinterest screenshot of a bulldog to be a dog.
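The cat/dog labels above come from thresholding the single sigmoid output. A minimal sketch of that decision rule, assuming the usual alphabetical folder ordering so that cats map to 0 and dogs to 1 (an assumption, not stated in the notes):

```python
import math

def sigmoid(x):
    # squashes a raw score into a probability in (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def label(prob, threshold=0.5):
    # Assumption: class 0 = cat, class 1 = dog (alphabetical folder order)
    return "dog" if prob >= threshold else "cat"

print(label(sigmoid(2.3)))   # dog
print(label(sigmoid(-1.7)))  # cat
```

A misclassification like the Balinese cat simply means its sigmoid output landed on the wrong side of 0.5.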
Below are the predicted-probability histograms for the LinearClassifier and the BoostedTreesClassifier.
The two plots have a similar overall shape but differ in how the individual predicted probabilities are distributed. On closer inspection, the boosted trees probability curve is more sharply defined than the one produced by logistic regression.
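A minimal sketch of how such histograms bin the predicted probabilities (the probabilities below are placeholders; the actual plots use predictions from the two estimators):

```python
def histogram(probs, bins=10):
    # Count how many predicted probabilities fall into each of `bins`
    # equal-width intervals on [0, 1].
    counts = [0] * bins
    for p in probs:
        idx = min(int(p * bins), bins - 1)  # clamp p == 1.0 into the last bin
        counts[idx] += 1
    return counts

# Placeholder values standing in for model.predict output:
linear_probs = [0.1, 0.45, 0.5, 0.52, 0.55, 0.6, 0.9]
print(histogram(linear_probs))  # [0, 1, 0, 0, 1, 3, 1, 0, 0, 1]
```

A flatter, more spread-out count vector corresponds to the smoother logistic regression curve; tighter peaks correspond to the more defined boosted trees curve.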
Features such as sex, age, and class contributed strongly in the negative direction, while fare was the only feature to contribute substantially in the positive direction.
Consistent with the previous graphs, the two feature-importance plots confirm how important sex and fare (which serves as a proxy for class) are in predicting survival.