Training proceeds based on GDL_code/03_03_vae_digits_train.ipynb.

[Note to self] By default, edits to my own modules are not picked up while IPython is running, so the top of GDL_code/03_03_vae_digits_train.ipynb contains the following:

%load_ext autoreload
%autoreload 2
  • '%autoreload 2' reloads every module imported with 'import func' automatically, so each call to func() uses the latest code.
  • '%autoreload 1' reloads only the modules imported with '%aimport func' before each execution of func().
  • '%autoreload' (no argument) reloads all imported modules once, each time the magic itself is run.
In [1]:
%load_ext autoreload
%autoreload 2

3.3 The Variational Art Exhibition

Each image is encoded to a vector together with a degree of confidence, and the encoding is encouraged to stay close to a fixed reference point (the origin).

3.4 Building a Variational Autoencoder

We only need to change the encoder and the loss function of the autoencoder built in Section 3.2.

3.4.1 The Encoder

The encoder in Section 3.2 mapped each image directly to a single point in the latent space. A variational encoder instead maps each image to a multivariate normal distribution centered on a point in the latent space.

Since the variational autoencoder assumes no correlation between any dimensions of the latent space, the covariance matrix is diagonal. The encoder therefore only needs to map each input to a mean vector and a variance vector, with no cross-dimension correlations to worry about. Furthermore, mapping to the logarithm of the variance lets the output take any real value in $(-\infty, \infty)$.
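To make the diagonal-covariance assumption concrete, here is a small NumPy sketch (independent of the book's code; the variable names are illustrative): sampling each latent dimension independently from $N(\mu_i, \sigma_i^2)$ is equivalent to drawing once from a multivariate normal whose covariance matrix is diagonal.

```python
import numpy as np

rng = np.random.default_rng(0)

mu = np.array([1.0, -2.0])       # mean vector, one entry per latent dimension
log_var = np.array([0.0, 1.0])   # log-variance vector
cov = np.diag(np.exp(log_var))   # diagonal covariance: no cross-dimension correlation

# Per-dimension sampling, as the VAE encoder implies ...
z_indep = mu + np.sqrt(np.exp(log_var)) * rng.standard_normal(mu.shape)

# ... is equivalent in distribution to one multivariate draw with `cov`.
z_mvn = rng.multivariate_normal(mu, cov)

print(cov)  # off-diagonal entries are exactly zero
```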

The normal distribution

With mean $\mu$, variance $\sigma^2$, and standard deviation $\sigma$, the probability density function of the one-dimensional normal distribution is

$\displaystyle f(x | \mu, \sigma^2) = \frac{1}{\sqrt{2 \pi \sigma^2}} e^{-\frac{(x-\mu)^2}{2 \sigma^2}}$

A point $z$ is sampled using the following formula.

$z = \mu + \sigma \epsilon$

Since $x = e^{\log x}$, it follows that $\displaystyle \sigma = e^{\frac{\log \sigma^2}{2}}$. Therefore, with log_var holding the logarithm of the variance of each dimension, sigma can be computed as sigma = exp(log_var / 2), and the sample is z = mu + sigma * epsilon.
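The sampling step can be sketched in plain NumPy (names mirror the text; this is an illustration, not the Keras Sampling layer defined later):

```python
import numpy as np

rng = np.random.default_rng(42)

mu = np.array([0.5, -1.0])       # per-dimension mean
log_var = np.array([0.2, -0.4])  # per-dimension log of the variance

sigma = np.exp(log_var / 2)      # sigma = exp(log(sigma^2) / 2)
epsilon = rng.standard_normal(mu.shape)  # epsilon ~ N(0, I)

z = mu + sigma * epsilon         # the reparameterized sample

print(sigma ** 2)  # squaring sigma recovers exp(log_var), the variance
```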

epsilon is a point sampled from the standard normal distribution. In the book's analogy, mu is Mr. N. Coder's opinion about where to place the marker, sigma is his confidence in that opinion, and epsilon is a randomly chosen amount of displacement from that spot.

In the earlier example the latent space was not required to be continuous, but here a constraint is added so that points in the neighborhood of mu also decode to similar images.

The new variational encoder has the following features.

  • Instead of connecting the Flatten layer directly to the latent-space layer, it is connected to the mu and log_var layers.
  • The Sampling layer samples a point in the latent space from the normal distribution defined by mu and log_var.
  • The encoder model outputs three values: mu, log_var, and z.
In [2]:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(-5, 5, 200)

def f(x, m, v):
    d = x-m
    return np.exp(- d*d / (2.0 * v)) / np.sqrt(2 * np.pi * v)

fig, ax = plt.subplots(1,1,figsize=(8,6))

ax.plot(x, f(x, 0.0, 0.2), label='mean=0.0, variance=0.2', color='blue')
ax.plot(x, f(x, 0.0, 1.0), label='mean=0.0, variance=1.0', color='red')
ax.plot(x, f(x, 0.0, 5.0), label='mean=0.0, variance=5.0', color='orange')
ax.plot(x, f(x, -2.0, 0.5), label='mean=-2.0, variance=0.5', color='green')

plt.legend()
plt.show()
    

Libraries

In [3]:
# GDL_code/utils/loaders.py
# same as gdl_ch03_01
from tensorflow.keras.datasets import mnist

def load_mnist():
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    x_train = x_train.astype('float32') / 255.0
    x_train = x_train.reshape(x_train.shape + (1,))
    x_test = x_test.astype('float32') / 255.0
    x_test = x_test.reshape(x_test.shape + (1,))
    return (x_train, y_train), (x_test, y_test)
In [4]:
# GDL_code/utils/loaders.py
import os
import pickle

def load_model(model_class, folder):
    with open(os.path.join(folder, 'params.pkl'), 'rb') as f:
        params = pickle.load(f)
    model = model_class(*params)
    model.load_weights(os.path.join(folder, 'weights/weights.h5'))
    return model 
In [5]:
# GDL_code/utils/callbacks.py
# same as gdl_ch03_01

import os
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.callbacks import Callback, LearningRateScheduler

class CustomCallback(Callback):
    def __init__(self, run_folder, print_every_n_batches, initial_epoch, vae):
        self.run_folder = run_folder
        self.print_every_n_batches = print_every_n_batches
        self.epoch = initial_epoch
        self.vae = vae
        
        
    def on_train_batch_end(self, batch, logs={}):
        if batch % self.print_every_n_batches == 0:
            z_new = np.random.normal(size=(1,self.vae.z_dim))
            reconst = self.vae.decoder.predict(np.array(z_new))[0].squeeze()
            
            filepath = os.path.join(self.run_folder, 'images', 'img_'+str(self.epoch).zfill(3)+'_'+str(batch)+'.jpg')
            if len(reconst.shape) == 2:
                plt.imsave(filepath, reconst, cmap='gray_r')
            else:
                plt.imsave(filepath, reconst)
        
        
    def on_epoch_begin(self, epoch, logs={}):
        self.epoch += 1
        
        
def step_decay_schedule(initial_lr, decay_factor=0.5, step_size=1):
    '''
    Wrapper function to create a LearningRateScheduler with step decay schedule.
    '''
    def schedule(epoch):
        new_lr = initial_lr * (decay_factor ** np.floor(epoch/step_size))
        return new_lr
    return LearningRateScheduler(schedule)
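The schedule above can be checked with a few hand computations (pure Python/NumPy, no Keras needed); for example, with initial_lr=0.0005, decay_factor=0.5 and step_size=10, the learning rate halves every 10 epochs:

```python
import numpy as np

def step_decay(initial_lr, decay_factor, step_size, epoch):
    # Same formula as the schedule() closure above.
    return initial_lr * (decay_factor ** np.floor(epoch / step_size))

for epoch in (0, 9, 10, 20):
    print(epoch, step_decay(0.0005, 0.5, 10, epoch))
# epochs 0 and 9 give 0.0005; epoch 10 gives 0.00025; epoch 20 gives 0.000125
```

Note that train() below calls step_decay_schedule with lr_decay defaulting to 1, in which case the schedule leaves the learning rate unchanged.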

Variational Autoencoder (VAE)

In [6]:
# GDL_code/models/VAE.py

from tensorflow.keras.layers import Input, Conv2D, LeakyReLU, BatchNormalization, Dropout, Flatten, Dense, Reshape, Conv2DTranspose, Activation
from tensorflow.keras import backend as K
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import plot_model
from tensorflow.keras.callbacks import ModelCheckpoint

import numpy as np   ### added: np.prod is used in _build()
import os
import pickle


class VariationalAutoencoder():
    def __init__(self, 
                input_dim,
                encoder_conv_filters,
                encoder_conv_kernel_size,
                encoder_conv_strides,
                decoder_conv_t_filters,
                decoder_conv_t_kernel_size,
                decoder_conv_t_strides,
                z_dim,
                 r_loss_factor,   ### added
                use_batch_norm = False,
                use_dropout = False
                ):
            self.name = 'variational_autoencoder'
            self.input_dim = input_dim
            self.encoder_conv_filters = encoder_conv_filters
            self.encoder_conv_kernel_size = encoder_conv_kernel_size
            self.encoder_conv_strides = encoder_conv_strides
            self.decoder_conv_t_filters = decoder_conv_t_filters
            self.decoder_conv_t_kernel_size = decoder_conv_t_kernel_size
            self.decoder_conv_t_strides = decoder_conv_t_strides
            self.z_dim = z_dim
            self.r_loss_factor = r_loss_factor   ### added
            
            self.use_batch_norm = use_batch_norm
            self.use_dropout = use_dropout
            
            self.n_layers_encoder = len(encoder_conv_filters)
            self.n_layers_decoder = len(decoder_conv_t_filters)
            
            self._build()
 

    def _build(self):
        ### THE ENCODER
        encoder_input = Input(shape=self.input_dim, name='encoder_input')
        x = encoder_input
        
        for i in range(self.n_layers_encoder):
            conv_layer = Conv2D(
                filters = self.encoder_conv_filters[i],
                kernel_size = self.encoder_conv_kernel_size[i],
                strides = self.encoder_conv_strides[i],
                padding  = 'same',
                name = 'encoder_conv_' + str(i)
            )
            x = conv_layer(x)

            if self.use_batch_norm:                   ### The order of layers is opposite to AutoEncoder
                x = BatchNormalization()(x)        ###   AE: LeakyReLU -> BatchNorm
            x = LeakyReLU()(x)                           ###   VAE: BatchNorm -> LeakyReLU
            
            if self.use_dropout:
                x = Dropout(rate = 0.25)(x)
        
        shape_before_flattening = K.int_shape(x)[1:]
        
        x = Flatten()(x)
        
        self.mu = Dense(self.z_dim, name='mu')(x)    ### added
        self.log_var = Dense(self.z_dim, name='log_var')(x)  ### added
        self.z = Sampling(name='encoder_output')([self.mu, self.log_var]) ### added
        
        self.encoder = Model(encoder_input, [self.mu, self.log_var, self.z], name='encoder')   ### added
        
        # encoder_output = Dense(self.z_dim, name='encoder_output')(x)   ### deleted      

        # self.encoder = Model(encoder_input, encoder_output)   ### deleted
        
        ### THE DECODER
        decoder_input = Input(shape=(self.z_dim,), name='decoder_input')
        x = Dense(np.prod(shape_before_flattening))(decoder_input)
        x = Reshape(shape_before_flattening)(x)
        
        for i in range(self.n_layers_decoder):
            conv_t_layer = Conv2DTranspose(
                filters = self.decoder_conv_t_filters[i],
                kernel_size = self.decoder_conv_t_kernel_size[i],
                strides = self.decoder_conv_t_strides[i],
                padding = 'same',
                name = 'decoder_conv_t_' + str(i)
            )
            x = conv_t_layer(x)
            
            if i < self.n_layers_decoder - 1:
                if self.use_batch_norm:              ### The order of layers is opposite to AutoEncoder
                    x = BatchNormalization()(x)   ###     AE: LeakyReLU -> BatchNorm
                x = LeakyReLU()(x)                     ###      VAE: BatchNorm -> LeakyReLU                
                if self.use_dropout:
                    x = Dropout(rate=0.25)(x)
            else:
                x = Activation('sigmoid')(x)
       
        decoder_output = x
        self.decoder = Model(decoder_input, decoder_output, name='decoder')  ### added (name)
        #self.decoder = Model(decoder_input, decoder_output)                               ### deleted
        
        ### THE FULL AUTOENCODER
        self.model = VAEModel(self.encoder, self.decoder, self.r_loss_factor)
        
        #model_input = encoder_input                                       ### deleted
        #model_output = self.decoder(encoder_output)         ### deleted
        
        #self.model = Model(model_input, model_output)      ### deleted

        
    def compile(self, learning_rate):
        self.learning_rate = learning_rate
        optimizer = Adam(learning_rate=learning_rate)   ### 'lr' is deprecated in recent TF; 'learning_rate' is the current keyword
        #def r_loss(y_true, y_pred):                                                            ### deleted
        #    return K.mean(K.square(y_true - y_pred), axis = [1,2,3])     ### deleted
        #self.model.compile(optimizer=optimizer, loss = r_loss)              ### deleted
        self.model.compile(optimizer=optimizer)   ### added
        
        
    def save(self, folder):
        if not os.path.exists(folder):
            os.makedirs(folder)
            os.makedirs(os.path.join(folder, 'viz'))
            os.makedirs(os.path.join(folder, 'weights'))
            os.makedirs(os.path.join(folder, 'images'))
            
        with open(os.path.join(folder, 'params.pkl'), 'wb') as f:
            pickle.dump([
                self.input_dim,
                self.encoder_conv_filters,
                self.encoder_conv_kernel_size,
                self.encoder_conv_strides,
                self.decoder_conv_t_filters,
                self.decoder_conv_t_kernel_size,
                self.decoder_conv_t_strides,
                self.z_dim,
                self.r_loss_factor,   ### added: load_model() passes params positionally, so r_loss_factor must be saved too
                self.use_batch_norm,
                self.use_dropout
            ], f)
            
        self.plot_model(folder)
        
        
    def plot_model(self, run_folder):
        ### start of section added by nitta
        path = os.path.join(run_folder, 'viz')
        if not os.path.exists(path):
            os.makedirs(path)
        ### end of section added by nitta
        plot_model(self.model, to_file=os.path.join(run_folder, 'viz/model.png'), show_shapes=True, show_layer_names=True)
        plot_model(self.encoder, to_file=os.path.join(run_folder, 'viz/encoder.png'), show_shapes=True, show_layer_names=True)
        plot_model(self.decoder, to_file=os.path.join(run_folder, 'viz/decoder.png'), show_shapes=True, show_layer_names=True)

        
    def load_weights(self, filepath):
        self.model.load_weights(filepath)
        
        
    def train(self, x_train, batch_size, epochs, run_folder, print_every_n_batches=100, initial_epoch=0, lr_decay=1):
        custom_callback = CustomCallback(run_folder, print_every_n_batches, initial_epoch, self)
        lr_sched = step_decay_schedule(initial_lr=self.learning_rate, decay_factor=lr_decay, step_size=1)
        
        checkpoint_filepath = os.path.join(run_folder, "weights/weights-{epoch:03d}-{loss:.2f}.h5")         ### added (Bug?)
        checkpoint1 = ModelCheckpoint(checkpoint_filepath, save_weights_only=True, verbose=1)           ### added
        checkpoint2 = ModelCheckpoint(os.path.join(run_folder, 'weights/weights.h5'), save_weights_only=True, verbose=1)
        #callbacks_list = [checkpoint1, checkpoint2, custom_callback, lr_sched]   ### added
        callbacks_list = [checkpoint2, custom_callback, lr_sched]  ### checkpoint1 removed (its filepath formatting fails)
        self.model.fit(
            x_train,
            x_train,
            batch_size = batch_size,
            shuffle = True,
            epochs = epochs,
            initial_epoch = initial_epoch,
            callbacks = callbacks_list)
        
    ### added
    ###   The second argument data_flow is a generator. When a generator is passed as model.fit()'s first argument x=input_data, the second argument y=target_data must not be given (the targets come from x).
    ###   The other difference is the extra steps_per_epoch argument, which is passed on to model.fit().
    def train_with_generator(self, data_flow, epochs, steps_per_epoch, run_folder, print_every_n_batches=100, initial_epoch=0, lr_decay=1):
        custom_callback = CustomCallback(run_folder, print_every_n_batches, initial_epoch, self)
        lr_sched = step_decay_schedule(initial_lr=self.learning_rate, decay_factor=lr_decay, step_size=1)
        checkpoint_filepath = os.path.join(run_folder, "weights/weights-{epoch:03d}-{loss:.2f}.h5")         ###(Bug?)
        checkpoint1 = ModelCheckpoint(checkpoint_filepath, save_weights_only=True, verbose=1) 
        checkpoint2 = ModelCheckpoint(os.path.join(run_folder, 'weights/weights.h5'), save_weights_only=True, verbose=1)
        #callbacks_list = [checkpoint1, checkpoint2, custom_callback, lr_sched] 
        callbacks_list = [checkpoint2, custom_callback, lr_sched] 
        self.model.fit(
            data_flow,
            shuffle = True,
            epochs = epochs,
            initial_epoch = initial_epoch,
            callbacks = callbacks_list,
            steps_per_epoch=steps_per_epoch)
In [7]:
# GDL_code/models/VAE.py

from tensorflow.keras.layers import Layer
from tensorflow.keras import backend as K

class Sampling(Layer):
    def call(self, inputs):
        mu, log_var = inputs
        epsilon = K.random_normal(shape=K.shape(mu), mean=0., stddev=1.)
        return mu + K.exp(log_var / 2) * epsilon

The TensorFlow GradientTape class

GradientTape is a class for computing gradients.

[Open question, memo 2021/02/18]

Chapter 3
  GDL_code/models/VAE.py
  Definition of the VAEModel class
  Line 46: grads = tape.gradient(...)
    The tape variable is defined by the with ... as tape: starting on line 35, so doesn't line 46 also need to be inside that scope?
    That is, doesn't it need to be indented one level deeper?

[Question resolved, 2021/02/18] Apparently writing it as follows is normal: the variable bound with with ... as remains usable outside the with block.

  x = tf.constant(...)
  with tf.GradientTape() as g:     # record the computations inside on g
    g.watch(x)            # track x
    y = <expression in x> # the expression to differentiate
  g.gradient(y, x).numpy()    # slope of y with respect to x (= y') at the value of x
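The scoping behavior itself is plain Python rather than anything TensorFlow-specific: a with statement does not create a new scope, so the name bound by as stays defined after the block exits. A TF-free sketch (io.StringIO is just a stand-in context manager):

```python
import io

with io.StringIO("recorded") as tape_like:
    content = tape_like.read()  # work done while the context is active

# The name bound by `as` is still defined here; only __exit__ has run.
# For GradientTape, __exit__ merely stops recording, so calling
# tape.gradient(...) after the block is legitimate.
print(content, tape_like.closed)
```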
In [8]:
import tensorflow as tf

class VAEModel(Model):
    def __init__(self, encoder, decoder, r_loss_factor, **kwargs):
        super(VAEModel, self).__init__(**kwargs)
        self.encoder = encoder
        self.decoder = decoder
        self.r_loss_factor = r_loss_factor

    def train_step(self, data):
        if isinstance(data, tuple):
            data = data[0]
        with tf.GradientTape() as tape:
            z_mean, z_log_var, z = self.encoder(data)
            reconstruction = self.decoder(z)
            reconstruction_loss = tf.reduce_mean(
                tf.square(data - reconstruction), axis = [1,2,3]
            )
            reconstruction_loss *= self.r_loss_factor
            kl_loss = 1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var)
            kl_loss = tf.reduce_sum(kl_loss, axis = 1)
            kl_loss *= -0.5
            total_loss = reconstruction_loss + kl_loss
        grads = tape.gradient(total_loss, self.trainable_weights)
        self.optimizer.apply_gradients(zip(grads, self.trainable_weights))
        return {
            "loss": total_loss,
            "reconstruction_loss": reconstruction_loss,
            "kl_loss": kl_loss,
        }

    def call(self, inputs):
        _, _, z = self.encoder(inputs)   # the encoder outputs [mu, log_var, z]
        return self.decoder(z)
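The kl_loss expression in train_step is the closed form of $KL(N(\mu, \sigma^2)\,\|\,N(0, 1))$ summed over the latent dimensions. A NumPy check against the per-dimension formula $\frac{1}{2}(\sigma^2 + \mu^2 - 1 - \log\sigma^2)$ (the sample values here are arbitrary):

```python
import numpy as np

z_mean = np.array([0.3, -1.2])
z_log_var = np.array([0.1, -0.5])

# The expression as written in train_step above
kl = -0.5 * np.sum(1 + z_log_var - np.square(z_mean) - np.exp(z_log_var))

# Analytic KL(N(mu, sigma^2) || N(0, 1)), summed over independent dimensions
sigma2 = np.exp(z_log_var)
kl_ref = np.sum(0.5 * (sigma2 + z_mean ** 2 - 1.0 - np.log(sigma2)))

print(kl, kl_ref)  # the two agree; KL is zero only when mu=0 and log_var=0
```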
In [9]:
# run params
# [Note to self] os.mkdir() raises an error when an intermediate folder in the path does not exist, so it was changed to os.makedirs().

SECTION = 'vae'
RUN_ID = '0002'
DATA_NAME = 'digits'
RUN_FOLDER = 'run/{}/'.format(SECTION)
RUN_FOLDER += '_'.join([RUN_ID, DATA_NAME])

if not os.path.exists(RUN_FOLDER):
    os.makedirs(RUN_FOLDER)
    
for s in ['viz', 'images', 'weights']:
    path = os.path.join(RUN_FOLDER, s)
    if not os.path.exists(path):
        os.makedirs(path)
        
mode = 'build' # 'load'

Load the data

In [10]:
(x_train, y_train), (x_test, y_test) = load_mnist()

Build the neural network model

In [11]:
vae = VariationalAutoencoder(
    input_dim = (28, 28, 1),
    encoder_conv_filters = [32, 64, 64, 64],
    encoder_conv_kernel_size = [3, 3, 3, 3],
    encoder_conv_strides = [1, 2, 2, 1],
    decoder_conv_t_filters = [64, 64, 32, 1],
    decoder_conv_t_kernel_size = [3, 3, 3, 3],
    decoder_conv_t_strides = [1, 2, 2, 1],
    z_dim = 2,
    r_loss_factor = 1000
)
In [12]:
if mode == 'build':
    vae.save(RUN_FOLDER)
else:
    vae.load_weights(os.path.join(RUN_FOLDER, 'weights/weights.h5'))
In [13]:
vae.encoder.summary()
Model: "encoder"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
encoder_input (InputLayer)      [(None, 28, 28, 1)]  0                                            
__________________________________________________________________________________________________
encoder_conv_0 (Conv2D)         (None, 28, 28, 32)   320         encoder_input[0][0]              
__________________________________________________________________________________________________
leaky_re_lu (LeakyReLU)         (None, 28, 28, 32)   0           encoder_conv_0[0][0]             
__________________________________________________________________________________________________
encoder_conv_1 (Conv2D)         (None, 14, 14, 64)   18496       leaky_re_lu[0][0]                
__________________________________________________________________________________________________
leaky_re_lu_1 (LeakyReLU)       (None, 14, 14, 64)   0           encoder_conv_1[0][0]             
__________________________________________________________________________________________________
encoder_conv_2 (Conv2D)         (None, 7, 7, 64)     36928       leaky_re_lu_1[0][0]              
__________________________________________________________________________________________________
leaky_re_lu_2 (LeakyReLU)       (None, 7, 7, 64)     0           encoder_conv_2[0][0]             
__________________________________________________________________________________________________
encoder_conv_3 (Conv2D)         (None, 7, 7, 64)     36928       leaky_re_lu_2[0][0]              
__________________________________________________________________________________________________
leaky_re_lu_3 (LeakyReLU)       (None, 7, 7, 64)     0           encoder_conv_3[0][0]             
__________________________________________________________________________________________________
flatten (Flatten)               (None, 3136)         0           leaky_re_lu_3[0][0]              
__________________________________________________________________________________________________
mu (Dense)                      (None, 2)            6274        flatten[0][0]                    
__________________________________________________________________________________________________
log_var (Dense)                 (None, 2)            6274        flatten[0][0]                    
__________________________________________________________________________________________________
encoder_output (Sampling)       (None, 2)            0           mu[0][0]                         
                                                                 log_var[0][0]                    
==================================================================================================
Total params: 105,220
Trainable params: 105,220
Non-trainable params: 0
__________________________________________________________________________________________________
In [14]:
vae.decoder.summary()
Model: "decoder"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
decoder_input (InputLayer)   [(None, 2)]               0         
_________________________________________________________________
dense (Dense)                (None, 3136)              9408      
_________________________________________________________________
reshape (Reshape)            (None, 7, 7, 64)          0         
_________________________________________________________________
decoder_conv_t_0 (Conv2DTran (None, 7, 7, 64)          36928     
_________________________________________________________________
leaky_re_lu_4 (LeakyReLU)    (None, 7, 7, 64)          0         
_________________________________________________________________
decoder_conv_t_1 (Conv2DTran (None, 14, 14, 64)        36928     
_________________________________________________________________
leaky_re_lu_5 (LeakyReLU)    (None, 14, 14, 64)        0         
_________________________________________________________________
decoder_conv_t_2 (Conv2DTran (None, 28, 28, 32)        18464     
_________________________________________________________________
leaky_re_lu_6 (LeakyReLU)    (None, 28, 28, 32)        0         
_________________________________________________________________
decoder_conv_t_3 (Conv2DTran (None, 28, 28, 1)         289       
_________________________________________________________________
activation (Activation)      (None, 28, 28, 1)         0         
=================================================================
Total params: 102,017
Trainable params: 102,017
Non-trainable params: 0
_________________________________________________________________

Training

In [15]:
LEARNING_RATE = 0.0005

vae.compile(LEARNING_RATE)
In [16]:
BATCH_SIZE = 32
EPOCHS = 200
PRINT_EVERY_N_BATCHES = 100
INITIAL_EPOCH = 0

[Note to self]

The downloaded source code GDL_code/03_03_vae_digits_train.ipynb raises an error at this point.

...(snip)...
~/anaconda3/envs/generative/lib/python3.7/site-packages/tensorflow/python/keras/callbacks.py in _get_file_path(self, epoch, logs)
   1242         # `{mape:.2f}`. A mismatch between logged metrics and the path's
   1243         # placeholders can cause formatting to fail.
-> 1244         return self.filepath.format(epoch=epoch + 1, **logs)
   1245       except KeyError as e:
   1246         raise KeyError('Failed to format this callback filepath: "{}". '

TypeError: unsupported format string passed to numpy.ndarray.__format__
[Experiment 1]
Could this be the error suspected above, where epoch is really epochs and the key is therefore not found? ← No. The value is supplied via the epoch key when the string is formatted.

[Experiment 2]
In the train() and train_with_generator() methods of the VariationalAutoencoder class,
try removing checkpoint1 from the callbacks list.

vae.train() now appears to run correctly. A MacBook Air takes about 200 s/epoch; Windows (GPU GTX2070, gtune2) takes about 22 s/epoch, roughly a 10x speedup from the GPU.
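The TypeError can be reproduced without Keras. train_step returns total_loss as a per-sample array, and a multi-element numpy.ndarray rejects numeric format specs, so the {loss:.2f} placeholder in checkpoint1's filename fails. A minimal sketch of the mechanism (the array values are made up):

```python
import numpy as np

per_sample_loss = np.array([45.2, 47.9])  # shaped like the per-sample total_loss

try:
    # The same kind of formatting ModelCheckpoint does for "weights-{epoch:03d}-{loss:.2f}.h5"
    "weights-{epoch:03d}-{loss:.2f}.h5".format(epoch=1, loss=per_sample_loss)
    failed = False
except TypeError as e:
    failed = True
    print(e)  # unsupported format string passed to numpy.ndarray.__format__

# A scalar loss formats fine, which suggests reducing total_loss
# (e.g. with tf.reduce_mean) before reporting it:
fname = "weights-{epoch:03d}-{loss:.2f}.h5".format(epoch=1, loss=float(per_sample_loss.mean()))
print(fname)
```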

In [17]:
vae.train(
    x_train,
    batch_size = BATCH_SIZE,
    epochs = EPOCHS,
    run_folder = RUN_FOLDER,
    print_every_n_batches = PRINT_EVERY_N_BATCHES,
    initial_epoch = INITIAL_EPOCH
)
Epoch 1/200
   1/1875 [..............................] - ETA: 0s - loss: 229.8893 - reconstruction_loss: 229.8869 - kl_loss: 0.0025WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (0.174850). Check your callbacks.
1871/1875 [============================>.] - ETA: 0s - loss: 58.4362 - reconstruction_loss: 55.1907 - kl_loss: 3.2454
Epoch 00001: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 22s 12ms/step - loss: 58.4232 - reconstruction_loss: 55.1768 - kl_loss: 3.2464 - lr: 5.0000e-04
Epoch 2/200
1871/1875 [============================>.] - ETA: 0s - loss: 51.8596 - reconstruction_loss: 47.9841 - kl_loss: 3.8755
Epoch 00002: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 51.8576 - reconstruction_loss: 47.9825 - kl_loss: 3.8752 - lr: 5.0000e-04
Epoch 3/200
   1/1875 [..............................] - ETA: 0s - loss: 52.4876 - reconstruction_loss: 48.8823 - kl_loss: 3.6053WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (0.106043). Check your callbacks.
1871/1875 [============================>.] - ETA: 0s - loss: 50.3972 - reconstruction_loss: 46.1973 - kl_loss: 4.1998
Epoch 00003: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 50.3953 - reconstruction_loss: 46.1951 - kl_loss: 4.2002 - lr: 5.0000e-04
Epoch 4/200
1871/1875 [============================>.] - ETA: 0s - loss: 49.4654 - reconstruction_loss: 45.0782 - kl_loss: 4.3872
Epoch 00004: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 49.4588 - reconstruction_loss: 45.0719 - kl_loss: 4.3869 - lr: 5.0000e-04
Epoch 5/200
1871/1875 [============================>.] - ETA: 0s - loss: 48.8563 - reconstruction_loss: 44.3168 - kl_loss: 4.5395
Epoch 00005: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 48.8536 - reconstruction_loss: 44.3139 - kl_loss: 4.5398 - lr: 5.0000e-04
Epoch 6/200
   1/1875 [..............................] - ETA: 0s - loss: 48.9723 - reconstruction_loss: 44.1783 - kl_loss: 4.7940WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (0.101285). Check your callbacks.
1871/1875 [============================>.] - ETA: 0s - loss: 48.3398 - reconstruction_loss: 43.7060 - kl_loss: 4.6339
Epoch 00006: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 48.3377 - reconstruction_loss: 43.7040 - kl_loss: 4.6337 - lr: 5.0000e-04
Epoch 7/200
1871/1875 [============================>.] - ETA: 0s - loss: 47.9350 - reconstruction_loss: 43.2207 - kl_loss: 4.7143
Epoch 00007: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 47.9396 - reconstruction_loss: 43.2258 - kl_loss: 4.7138 - lr: 5.0000e-04
Epoch 8/200
1871/1875 [============================>.] - ETA: 0s - loss: 47.5806 - reconstruction_loss: 42.8038 - kl_loss: 4.7768
Epoch 00008: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 47.5832 - reconstruction_loss: 42.8064 - kl_loss: 4.7768 - lr: 5.0000e-04
Epoch 9/200
1871/1875 [============================>.] - ETA: 0s - loss: 47.3196 - reconstruction_loss: 42.4948 - kl_loss: 4.8248
Epoch 00009: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 22s 12ms/step - loss: 47.3259 - reconstruction_loss: 42.5011 - kl_loss: 4.8249 - lr: 5.0000e-04
Epoch 10/200
1871/1875 [============================>.] - ETA: 0s - loss: 47.1026 - reconstruction_loss: 42.2282 - kl_loss: 4.8744
Epoch 00010: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 22s 12ms/step - loss: 47.0993 - reconstruction_loss: 42.2249 - kl_loss: 4.8744 - lr: 5.0000e-04
Epoch 11/200
   1/1875 [..............................] - ETA: 0s - loss: 50.4831 - reconstruction_loss: 45.8741 - kl_loss: 4.6090WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (0.111411). Check your callbacks.
1871/1875 [============================>.] - ETA: 0s - loss: 46.8857 - reconstruction_loss: 41.9717 - kl_loss: 4.9140
Epoch 00011: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 24s 13ms/step - loss: 46.8886 - reconstruction_loss: 41.9738 - kl_loss: 4.9148 - lr: 5.0000e-04
Epoch 12/200
1871/1875 [============================>.] - ETA: 0s - loss: 46.6905 - reconstruction_loss: 41.7478 - kl_loss: 4.9427
Epoch 00012: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 46.6737 - reconstruction_loss: 41.7312 - kl_loss: 4.9425 - lr: 5.0000e-04
Epoch 13/200
1871/1875 [============================>.] - ETA: 0s - loss: 46.5336 - reconstruction_loss: 41.5552 - kl_loss: 4.9784
Epoch 00013: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 46.5387 - reconstruction_loss: 41.5606 - kl_loss: 4.9782 - lr: 5.0000e-04
Epoch 14/200
1871/1875 [============================>.] - ETA: 0s - loss: 46.4077 - reconstruction_loss: 41.4125 - kl_loss: 4.9952
Epoch 00014: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 46.4049 - reconstruction_loss: 41.4091 - kl_loss: 4.9959 - lr: 5.0000e-04
Epoch 15/200
1873/1875 [============================>.] - ETA: 0s - loss: 46.2553 - reconstruction_loss: 41.2360 - kl_loss: 5.0193
Epoch 00015: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 46.2593 - reconstruction_loss: 41.2400 - kl_loss: 5.0193 - lr: 5.0000e-04
Epoch 16/200
1871/1875 [============================>.] - ETA: 0s - loss: 46.1214 - reconstruction_loss: 41.0801 - kl_loss: 5.0413
Epoch 00016: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 22s 12ms/step - loss: 46.1192 - reconstruction_loss: 41.0772 - kl_loss: 5.0420 - lr: 5.0000e-04
Epoch 17/200
1871/1875 [============================>.] - ETA: 0s - loss: 46.0187 - reconstruction_loss: 40.9644 - kl_loss: 5.0544
Epoch 00017: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 46.0254 - reconstruction_loss: 40.9711 - kl_loss: 5.0543 - lr: 5.0000e-04
Epoch 18/200
1871/1875 [============================>.] - ETA: 0s - loss: 45.8995 - reconstruction_loss: 40.8186 - kl_loss: 5.0809
Epoch 00018: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 45.9033 - reconstruction_loss: 40.8216 - kl_loss: 5.0817 - lr: 5.0000e-04
Epoch 19/200
1871/1875 [============================>.] - ETA: 0s - loss: 45.8137 - reconstruction_loss: 40.7258 - kl_loss: 5.0879
Epoch 00019: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 45.8137 - reconstruction_loss: 40.7259 - kl_loss: 5.0879 - lr: 5.0000e-04
Epoch 20/200
1872/1875 [============================>.] - ETA: 0s - loss: 45.7223 - reconstruction_loss: 40.6192 - kl_loss: 5.1031
Epoch 00020: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 45.7248 - reconstruction_loss: 40.6214 - kl_loss: 5.1033 - lr: 5.0000e-04
Epoch 21/200
1871/1875 [============================>.] - ETA: 0s - loss: 45.6539 - reconstruction_loss: 40.5429 - kl_loss: 5.1110
Epoch 00021: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 45.6489 - reconstruction_loss: 40.5378 - kl_loss: 5.1111 - lr: 5.0000e-04
Epoch 22/200
   1/1875 [..............................] - ETA: 0s - loss: 45.2371 - reconstruction_loss: 40.0877 - kl_loss: 5.1493WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (0.101258). Check your callbacks.
1871/1875 [============================>.] - ETA: 0s - loss: 45.6237 - reconstruction_loss: 40.4918 - kl_loss: 5.1319
Epoch 00022: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 45.6292 - reconstruction_loss: 40.4976 - kl_loss: 5.1316 - lr: 5.0000e-04
Epoch 23/200
1871/1875 [============================>.] - ETA: 0s - loss: 45.5466 - reconstruction_loss: 40.3949 - kl_loss: 5.1517
Epoch 00023: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 45.5513 - reconstruction_loss: 40.3992 - kl_loss: 5.1520 - lr: 5.0000e-04
Epoch 24/200
   1/1875 [..............................] - ETA: 0s - loss: 46.6234 - reconstruction_loss: 41.5343 - kl_loss: 5.0891
WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (0.111315). Check your callbacks.
1871/1875 [============================>.] - ETA: 0s - loss: 45.4582 - reconstruction_loss: 40.3073 - kl_loss: 5.1509
Epoch 00024: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 45.4670 - reconstruction_loss: 40.3159 - kl_loss: 5.1511 - lr: 5.0000e-04
Epoch 25/200
   1/1875 [..............................] - ETA: 0s - loss: 48.4715 - reconstruction_loss: 42.8326 - kl_loss: 5.6388
WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (0.120935). Check your callbacks.
1872/1875 [============================>.] - ETA: 0s - loss: 45.4241 - reconstruction_loss: 40.2447 - kl_loss: 5.1793
Epoch 00025: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 22s 12ms/step - loss: 45.4212 - reconstruction_loss: 40.2420 - kl_loss: 5.1792 - lr: 5.0000e-04
Epoch 26/200
1871/1875 [============================>.] - ETA: 0s - loss: 45.3228 - reconstruction_loss: 40.1460 - kl_loss: 5.1767
Epoch 00026: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 24s 13ms/step - loss: 45.3156 - reconstruction_loss: 40.1387 - kl_loss: 5.1769 - lr: 5.0000e-04
Epoch 27/200
1871/1875 [============================>.] - ETA: 0s - loss: 45.2642 - reconstruction_loss: 40.0785 - kl_loss: 5.1857
Epoch 00027: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 45.2618 - reconstruction_loss: 40.0764 - kl_loss: 5.1854 - lr: 5.0000e-04
Epoch 28/200
   1/1875 [..............................] - ETA: 0s - loss: 45.0427 - reconstruction_loss: 40.0114 - kl_loss: 5.0313
WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (0.166257). Check your callbacks.
1871/1875 [============================>.] - ETA: 0s - loss: 45.2036 - reconstruction_loss: 40.0077 - kl_loss: 5.1959
Epoch 00028: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 45.2134 - reconstruction_loss: 40.0177 - kl_loss: 5.1956 - lr: 5.0000e-04
Epoch 29/200
1871/1875 [============================>.] - ETA: 0s - loss: 45.1868 - reconstruction_loss: 39.9804 - kl_loss: 5.2063
Epoch 00029: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 45.1841 - reconstruction_loss: 39.9776 - kl_loss: 5.2066 - lr: 5.0000e-04
Epoch 30/200
1871/1875 [============================>.] - ETA: 0s - loss: 45.1344 - reconstruction_loss: 39.9258 - kl_loss: 5.2085
Epoch 00030: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 45.1273 - reconstruction_loss: 39.9194 - kl_loss: 5.2079 - lr: 5.0000e-04
Epoch 31/200
1871/1875 [============================>.] - ETA: 0s - loss: 45.0729 - reconstruction_loss: 39.8498 - kl_loss: 5.2231
Epoch 00031: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 45.0697 - reconstruction_loss: 39.8469 - kl_loss: 5.2228 - lr: 5.0000e-04
Epoch 32/200
1871/1875 [============================>.] - ETA: 0s - loss: 45.0240 - reconstruction_loss: 39.8101 - kl_loss: 5.2139
Epoch 00032: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 45.0221 - reconstruction_loss: 39.8083 - kl_loss: 5.2138 - lr: 5.0000e-04
Epoch 33/200
   1/1875 [..............................] - ETA: 0s - loss: 41.9870 - reconstruction_loss: 36.9440 - kl_loss: 5.0430
WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (0.108130). Check your callbacks.
1871/1875 [============================>.] - ETA: 0s - loss: 44.9988 - reconstruction_loss: 39.7582 - kl_loss: 5.2406
Epoch 00033: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 44.9975 - reconstruction_loss: 39.7572 - kl_loss: 5.2403 - lr: 5.0000e-04
Epoch 34/200
1871/1875 [============================>.] - ETA: 0s - loss: 44.9334 - reconstruction_loss: 39.6992 - kl_loss: 5.2342
Epoch 00034: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 44.9322 - reconstruction_loss: 39.6979 - kl_loss: 5.2342 - lr: 5.0000e-04
Epoch 35/200
1874/1875 [============================>.] - ETA: 0s - loss: 44.8893 - reconstruction_loss: 39.6422 - kl_loss: 5.2471
Epoch 00035: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 44.8878 - reconstruction_loss: 39.6405 - kl_loss: 5.2473 - lr: 5.0000e-04
Epoch 36/200
1871/1875 [============================>.] - ETA: 0s - loss: 44.8688 - reconstruction_loss: 39.6197 - kl_loss: 5.2491
Epoch 00036: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 22s 12ms/step - loss: 44.8645 - reconstruction_loss: 39.6155 - kl_loss: 5.2490 - lr: 5.0000e-04
Epoch 37/200
1871/1875 [============================>.] - ETA: 0s - loss: 44.8020 - reconstruction_loss: 39.5345 - kl_loss: 5.2675
Epoch 00037: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 22s 12ms/step - loss: 44.8046 - reconstruction_loss: 39.5374 - kl_loss: 5.2672 - lr: 5.0000e-04
Epoch 38/200
1871/1875 [============================>.] - ETA: 0s - loss: 44.7759 - reconstruction_loss: 39.5048 - kl_loss: 5.2711
Epoch 00038: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 44.7785 - reconstruction_loss: 39.5081 - kl_loss: 5.2704 - lr: 5.0000e-04
Epoch 39/200
1871/1875 [============================>.] - ETA: 0s - loss: 44.7409 - reconstruction_loss: 39.4742 - kl_loss: 5.2667
Epoch 00039: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 44.7451 - reconstruction_loss: 39.4782 - kl_loss: 5.2668 - lr: 5.0000e-04
Epoch 40/200
1871/1875 [============================>.] - ETA: 0s - loss: 44.7363 - reconstruction_loss: 39.4598 - kl_loss: 5.2765
Epoch 00040: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 44.7290 - reconstruction_loss: 39.4524 - kl_loss: 5.2767 - lr: 5.0000e-04
Epoch 41/200
1871/1875 [============================>.] - ETA: 0s - loss: 44.6730 - reconstruction_loss: 39.3982 - kl_loss: 5.2748
Epoch 00041: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 25s 13ms/step - loss: 44.6768 - reconstruction_loss: 39.4024 - kl_loss: 5.2744 - lr: 5.0000e-04
Epoch 42/200
1871/1875 [============================>.] - ETA: 0s - loss: 44.6397 - reconstruction_loss: 39.3604 - kl_loss: 5.2794
Epoch 00042: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 22s 12ms/step - loss: 44.6426 - reconstruction_loss: 39.3627 - kl_loss: 5.2799 - lr: 5.0000e-04
Epoch 43/200
   1/1875 [..............................] - ETA: 0s - loss: 48.9175 - reconstruction_loss: 43.4915 - kl_loss: 5.4261
WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (0.123369). Check your callbacks.
1871/1875 [============================>.] - ETA: 0s - loss: 44.6168 - reconstruction_loss: 39.3371 - kl_loss: 5.2798
Epoch 00043: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 44.6247 - reconstruction_loss: 39.3451 - kl_loss: 5.2796 - lr: 5.0000e-04
Epoch 44/200
1871/1875 [============================>.] - ETA: 0s - loss: 44.5954 - reconstruction_loss: 39.2966 - kl_loss: 5.2988
Epoch 00044: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 44.5959 - reconstruction_loss: 39.2971 - kl_loss: 5.2987 - lr: 5.0000e-04
Epoch 45/200
   1/1875 [..............................] - ETA: 0s - loss: 43.1554 - reconstruction_loss: 37.7577 - kl_loss: 5.3977
WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (0.110952). Check your callbacks.
1871/1875 [============================>.] - ETA: 0s - loss: 44.5451 - reconstruction_loss: 39.2402 - kl_loss: 5.3050
Epoch 00045: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 22s 12ms/step - loss: 44.5423 - reconstruction_loss: 39.2375 - kl_loss: 5.3048 - lr: 5.0000e-04
Epoch 46/200
1871/1875 [============================>.] - ETA: 0s - loss: 44.5602 - reconstruction_loss: 39.2375 - kl_loss: 5.3227
Epoch 00046: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 22s 12ms/step - loss: 44.5572 - reconstruction_loss: 39.2342 - kl_loss: 5.3230 - lr: 5.0000e-04
Epoch 47/200
1871/1875 [============================>.] - ETA: 0s - loss: 44.5190 - reconstruction_loss: 39.2107 - kl_loss: 5.3083
Epoch 00047: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 44.5198 - reconstruction_loss: 39.2109 - kl_loss: 5.3089 - lr: 5.0000e-04
Epoch 48/200
1871/1875 [============================>.] - ETA: 0s - loss: 44.4618 - reconstruction_loss: 39.1529 - kl_loss: 5.3090
Epoch 00048: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 44.4678 - reconstruction_loss: 39.1586 - kl_loss: 5.3092 - lr: 5.0000e-04
Epoch 49/200
1871/1875 [============================>.] - ETA: 0s - loss: 44.4595 - reconstruction_loss: 39.1426 - kl_loss: 5.3168
Epoch 00049: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 44.4634 - reconstruction_loss: 39.1464 - kl_loss: 5.3170 - lr: 5.0000e-04
Epoch 50/200
1871/1875 [============================>.] - ETA: 0s - loss: 44.4067 - reconstruction_loss: 39.0885 - kl_loss: 5.3183
Epoch 00050: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 44.4090 - reconstruction_loss: 39.0913 - kl_loss: 5.3177 - lr: 5.0000e-04
Epoch 51/200
1871/1875 [============================>.] - ETA: 0s - loss: 44.3861 - reconstruction_loss: 39.0543 - kl_loss: 5.3318
Epoch 00051: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 44.3883 - reconstruction_loss: 39.0566 - kl_loss: 5.3317 - lr: 5.0000e-04
Epoch 52/200
   1/1875 [..............................] - ETA: 0s - loss: 42.0956 - reconstruction_loss: 36.7547 - kl_loss: 5.3409
WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (0.105479). Check your callbacks.
1871/1875 [============================>.] - ETA: 0s - loss: 44.3738 - reconstruction_loss: 39.0423 - kl_loss: 5.3315
Epoch 00052: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 44.3781 - reconstruction_loss: 39.0470 - kl_loss: 5.3311 - lr: 5.0000e-04
Epoch 53/200
1872/1875 [============================>.] - ETA: 0s - loss: 44.3529 - reconstruction_loss: 39.0229 - kl_loss: 5.3300
Epoch 00053: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 22s 12ms/step - loss: 44.3478 - reconstruction_loss: 39.0180 - kl_loss: 5.3298 - lr: 5.0000e-04
Epoch 54/200
1871/1875 [============================>.] - ETA: 0s - loss: 44.3330 - reconstruction_loss: 39.0074 - kl_loss: 5.3256
Epoch 00054: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 22s 12ms/step - loss: 44.3275 - reconstruction_loss: 39.0020 - kl_loss: 5.3255 - lr: 5.0000e-04
Epoch 55/200
1871/1875 [============================>.] - ETA: 0s - loss: 44.2752 - reconstruction_loss: 38.9437 - kl_loss: 5.3315
Epoch 00055: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 44.2709 - reconstruction_loss: 38.9396 - kl_loss: 5.3313 - lr: 5.0000e-04
Epoch 56/200
   1/1875 [..............................] - ETA: 0s - loss: 46.4123 - reconstruction_loss: 41.2205 - kl_loss: 5.1918
WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (0.101977). Check your callbacks.
1871/1875 [============================>.] - ETA: 0s - loss: 44.2810 - reconstruction_loss: 38.9252 - kl_loss: 5.3558
Epoch 00056: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 44.2833 - reconstruction_loss: 38.9281 - kl_loss: 5.3553 - lr: 5.0000e-04
Epoch 57/200
1871/1875 [============================>.] - ETA: 0s - loss: 44.2467 - reconstruction_loss: 38.9081 - kl_loss: 5.3386
Epoch 00057: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 44.2472 - reconstruction_loss: 38.9084 - kl_loss: 5.3388 - lr: 5.0000e-04
Epoch 58/200
1871/1875 [============================>.] - ETA: 0s - loss: 44.1988 - reconstruction_loss: 38.8511 - kl_loss: 5.3477
Epoch 00058: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 22s 12ms/step - loss: 44.2061 - reconstruction_loss: 38.8585 - kl_loss: 5.3476 - lr: 5.0000e-04
Epoch 59/200
1871/1875 [============================>.] - ETA: 0s - loss: 44.2155 - reconstruction_loss: 38.8424 - kl_loss: 5.3731
Epoch 00059: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 44.2149 - reconstruction_loss: 38.8421 - kl_loss: 5.3727 - lr: 5.0000e-04
Epoch 60/200
1871/1875 [============================>.] - ETA: 0s - loss: 44.2052 - reconstruction_loss: 38.8356 - kl_loss: 5.3696
Epoch 00060: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 44.2019 - reconstruction_loss: 38.8330 - kl_loss: 5.3689 - lr: 5.0000e-04
Epoch 61/200
1871/1875 [============================>.] - ETA: 0s - loss: 44.1852 - reconstruction_loss: 38.8208 - kl_loss: 5.3644
Epoch 00061: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 44.1878 - reconstruction_loss: 38.8232 - kl_loss: 5.3646 - lr: 5.0000e-04
Epoch 62/200
1871/1875 [============================>.] - ETA: 0s - loss: 44.1539 - reconstruction_loss: 38.7959 - kl_loss: 5.3580
Epoch 00062: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 44.1498 - reconstruction_loss: 38.7917 - kl_loss: 5.3581 - lr: 5.0000e-04
Epoch 63/200
   1/1875 [..............................] - ETA: 0s - loss: 44.8120 - reconstruction_loss: 39.1292 - kl_loss: 5.6828
WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (0.110465). Check your callbacks.
1871/1875 [============================>.] - ETA: 0s - loss: 44.1322 - reconstruction_loss: 38.7529 - kl_loss: 5.3793
Epoch 00063: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 44.1337 - reconstruction_loss: 38.7544 - kl_loss: 5.3793 - lr: 5.0000e-04
Epoch 64/200
1871/1875 [============================>.] - ETA: 0s - loss: 44.0973 - reconstruction_loss: 38.7169 - kl_loss: 5.3804
Epoch 00064: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 44.0920 - reconstruction_loss: 38.7115 - kl_loss: 5.3806 - lr: 5.0000e-04
Epoch 65/200
1871/1875 [============================>.] - ETA: 0s - loss: 44.1000 - reconstruction_loss: 38.7238 - kl_loss: 5.3762
Epoch 00065: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 44.0944 - reconstruction_loss: 38.7185 - kl_loss: 5.3758 - lr: 5.0000e-04
Epoch 66/200
1871/1875 [============================>.] - ETA: 0s - loss: 44.0698 - reconstruction_loss: 38.6818 - kl_loss: 5.3880
Epoch 00066: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 44.0715 - reconstruction_loss: 38.6829 - kl_loss: 5.3886 - lr: 5.0000e-04
Epoch 67/200
1871/1875 [============================>.] - ETA: 0s - loss: 44.0881 - reconstruction_loss: 38.7056 - kl_loss: 5.3825
Epoch 00067: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 44.0835 - reconstruction_loss: 38.7014 - kl_loss: 5.3821 - lr: 5.0000e-04
Epoch 68/200
   1/1875 [..............................] - ETA: 0s - loss: 45.8951 - reconstruction_loss: 40.4749 - kl_loss: 5.4202
WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (0.103999). Check your callbacks.
1871/1875 [============================>.] - ETA: 0s - loss: 44.0661 - reconstruction_loss: 38.6604 - kl_loss: 5.4057
Epoch 00068: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 44.0730 - reconstruction_loss: 38.6675 - kl_loss: 5.4055 - lr: 5.0000e-04
Epoch 69/200
1871/1875 [============================>.] - ETA: 0s - loss: 44.0555 - reconstruction_loss: 38.6550 - kl_loss: 5.4005
Epoch 00069: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 44.0462 - reconstruction_loss: 38.6456 - kl_loss: 5.4006 - lr: 5.0000e-04
Epoch 70/200
1871/1875 [============================>.] - ETA: 0s - loss: 44.0128 - reconstruction_loss: 38.6166 - kl_loss: 5.3962
Epoch 00070: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 44.0136 - reconstruction_loss: 38.6175 - kl_loss: 5.3961 - lr: 5.0000e-04
Epoch 71/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.9636 - reconstruction_loss: 38.5677 - kl_loss: 5.3959
Epoch 00071: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.9615 - reconstruction_loss: 38.5657 - kl_loss: 5.3958 - lr: 5.0000e-04
Epoch 72/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.9312 - reconstruction_loss: 38.5249 - kl_loss: 5.4063
Epoch 00072: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 28s 15ms/step - loss: 43.9333 - reconstruction_loss: 38.5271 - kl_loss: 5.4062 - lr: 5.0000e-04
Epoch 73/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.9590 - reconstruction_loss: 38.5637 - kl_loss: 5.3953
Epoch 00073: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 22s 12ms/step - loss: 43.9614 - reconstruction_loss: 38.5655 - kl_loss: 5.3958 - lr: 5.0000e-04
Epoch 74/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.9123 - reconstruction_loss: 38.5132 - kl_loss: 5.3991
Epoch 00074: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.9109 - reconstruction_loss: 38.5116 - kl_loss: 5.3993 - lr: 5.0000e-04
Epoch 75/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.9305 - reconstruction_loss: 38.5166 - kl_loss: 5.4139
Epoch 00075: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.9326 - reconstruction_loss: 38.5183 - kl_loss: 5.4143 - lr: 5.0000e-04
Epoch 76/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.9083 - reconstruction_loss: 38.4986 - kl_loss: 5.4097
Epoch 00076: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.9076 - reconstruction_loss: 38.4980 - kl_loss: 5.4096 - lr: 5.0000e-04
Epoch 77/200
1872/1875 [============================>.] - ETA: 0s - loss: 43.8762 - reconstruction_loss: 38.4718 - kl_loss: 5.4044
Epoch 00077: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.8818 - reconstruction_loss: 38.4776 - kl_loss: 5.4042 - lr: 5.0000e-04
Epoch 78/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.9243 - reconstruction_loss: 38.5078 - kl_loss: 5.4164
Epoch 00078: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 22s 12ms/step - loss: 43.9231 - reconstruction_loss: 38.5070 - kl_loss: 5.4161 - lr: 5.0000e-04
Epoch 79/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.8772 - reconstruction_loss: 38.4624 - kl_loss: 5.4148
Epoch 00079: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.8711 - reconstruction_loss: 38.4561 - kl_loss: 5.4150 - lr: 5.0000e-04
Epoch 80/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.8771 - reconstruction_loss: 38.4626 - kl_loss: 5.4145
Epoch 00080: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.8756 - reconstruction_loss: 38.4607 - kl_loss: 5.4149 - lr: 5.0000e-04
Epoch 81/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.8631 - reconstruction_loss: 38.4352 - kl_loss: 5.4279
Epoch 00081: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.8607 - reconstruction_loss: 38.4333 - kl_loss: 5.4274 - lr: 5.0000e-04
Epoch 82/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.8120 - reconstruction_loss: 38.3893 - kl_loss: 5.4228
Epoch 00082: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.8140 - reconstruction_loss: 38.3911 - kl_loss: 5.4229 - lr: 5.0000e-04
Epoch 83/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.7893 - reconstruction_loss: 38.3682 - kl_loss: 5.4211
Epoch 00083: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.7904 - reconstruction_loss: 38.3695 - kl_loss: 5.4209 - lr: 5.0000e-04
Epoch 84/200
   1/1875 [..............................] - ETA: 0s - loss: 45.8253 - reconstruction_loss: 40.4004 - kl_loss: 5.4249
WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (0.106696). Check your callbacks.
1871/1875 [============================>.] - ETA: 0s - loss: 43.8048 - reconstruction_loss: 38.3758 - kl_loss: 5.4290
Epoch 00084: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.7982 - reconstruction_loss: 38.3690 - kl_loss: 5.4291 - lr: 5.0000e-04
Epoch 85/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.7562 - reconstruction_loss: 38.3324 - kl_loss: 5.4238
Epoch 00085: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.7653 - reconstruction_loss: 38.3422 - kl_loss: 5.4231 - lr: 5.0000e-04
Epoch 86/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.7516 - reconstruction_loss: 38.3183 - kl_loss: 5.4333
Epoch 00086: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.7543 - reconstruction_loss: 38.3211 - kl_loss: 5.4332 - lr: 5.0000e-04
Epoch 87/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.7543 - reconstruction_loss: 38.3254 - kl_loss: 5.4289
Epoch 00087: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.7594 - reconstruction_loss: 38.3305 - kl_loss: 5.4289 - lr: 5.0000e-04
Epoch 88/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.7567 - reconstruction_loss: 38.3248 - kl_loss: 5.4319
Epoch 00088: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.7589 - reconstruction_loss: 38.3273 - kl_loss: 5.4316 - lr: 5.0000e-04
Epoch 89/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.7515 - reconstruction_loss: 38.3173 - kl_loss: 5.4342
Epoch 00089: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.7517 - reconstruction_loss: 38.3175 - kl_loss: 5.4342 - lr: 5.0000e-04
Epoch 90/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.7269 - reconstruction_loss: 38.2842 - kl_loss: 5.4427
Epoch 00090: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.7300 - reconstruction_loss: 38.2873 - kl_loss: 5.4427 - lr: 5.0000e-04
Epoch 91/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.7363 - reconstruction_loss: 38.2975 - kl_loss: 5.4388
Epoch 00091: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.7408 - reconstruction_loss: 38.3016 - kl_loss: 5.4392 - lr: 5.0000e-04
Epoch 92/200
1872/1875 [============================>.] - ETA: 0s - loss: 43.7057 - reconstruction_loss: 38.2591 - kl_loss: 5.4465
Epoch 00092: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.7016 - reconstruction_loss: 38.2549 - kl_loss: 5.4467 - lr: 5.0000e-04
Epoch 93/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.6761 - reconstruction_loss: 38.2257 - kl_loss: 5.4505
Epoch 00093: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.6812 - reconstruction_loss: 38.2310 - kl_loss: 5.4502 - lr: 5.0000e-04
Epoch 94/200
1872/1875 [============================>.] - ETA: 0s - loss: 43.6812 - reconstruction_loss: 38.2353 - kl_loss: 5.4459
Epoch 00094: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 21s 11ms/step - loss: 43.6726 - reconstruction_loss: 38.2268 - kl_loss: 5.4458 - lr: 5.0000e-04
Epoch 95/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.7068 - reconstruction_loss: 38.2634 - kl_loss: 5.4433
Epoch 00095: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 21s 11ms/step - loss: 43.7063 - reconstruction_loss: 38.2630 - kl_loss: 5.4433 - lr: 5.0000e-04
Epoch 96/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.6866 - reconstruction_loss: 38.2257 - kl_loss: 5.4609
Epoch 00096: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 21s 11ms/step - loss: 43.6897 - reconstruction_loss: 38.2294 - kl_loss: 5.4603 - lr: 5.0000e-04
Epoch 97/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.6167 - reconstruction_loss: 38.1680 - kl_loss: 5.4487
Epoch 00097: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 21s 11ms/step - loss: 43.6195 - reconstruction_loss: 38.1705 - kl_loss: 5.4491 - lr: 5.0000e-04
Epoch 98/200
1873/1875 [============================>.] - ETA: 0s - loss: 43.6195 - reconstruction_loss: 38.1760 - kl_loss: 5.4435
Epoch 00098: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 21s 11ms/step - loss: 43.6157 - reconstruction_loss: 38.1721 - kl_loss: 5.4435 - lr: 5.0000e-04
Epoch 99/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.6041 - reconstruction_loss: 38.1460 - kl_loss: 5.4581
Epoch 00099: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 21s 11ms/step - loss: 43.6032 - reconstruction_loss: 38.1453 - kl_loss: 5.4579 - lr: 5.0000e-04
Epoch 100/200
1873/1875 [============================>.] - ETA: 0s - loss: 43.6218 - reconstruction_loss: 38.1740 - kl_loss: 5.4478
Epoch 00100: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 21s 11ms/step - loss: 43.6154 - reconstruction_loss: 38.1678 - kl_loss: 5.4476 - lr: 5.0000e-04
Epoch 101/200
   1/1875 [..............................] - ETA: 0s - loss: 45.8600 - reconstruction_loss: 40.4402 - kl_loss: 5.4198
WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (0.101320). Check your callbacks.
1875/1875 [==============================] - ETA: 0s - loss: 43.6291 - reconstruction_loss: 38.1701 - kl_loss: 5.4591
Epoch 00101: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 21s 11ms/step - loss: 43.6277 - reconstruction_loss: 38.1687 - kl_loss: 5.4590 - lr: 5.0000e-04
Epoch 102/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.5748 - reconstruction_loss: 38.1301 - kl_loss: 5.4447
Epoch 00102: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 21s 11ms/step - loss: 43.5699 - reconstruction_loss: 38.1245 - kl_loss: 5.4453 - lr: 5.0000e-04
Epoch 103/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.5755 - reconstruction_loss: 38.1195 - kl_loss: 5.4559
Epoch 00103: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 21s 11ms/step - loss: 43.5773 - reconstruction_loss: 38.1214 - kl_loss: 5.4560 - lr: 5.0000e-04
Epoch 104/200
1873/1875 [============================>.] - ETA: 0s - loss: 43.5473 - reconstruction_loss: 38.0855 - kl_loss: 5.4618
Epoch 00104: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 21s 11ms/step - loss: 43.5454 - reconstruction_loss: 38.0834 - kl_loss: 5.4620 - lr: 5.0000e-04
Epoch 105/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.5706 - reconstruction_loss: 38.0981 - kl_loss: 5.4725
Epoch 00105: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 21s 11ms/step - loss: 43.5745 - reconstruction_loss: 38.1025 - kl_loss: 5.4721 - lr: 5.0000e-04
Epoch 106/200
1875/1875 [==============================] - ETA: 0s - loss: 43.5539 - reconstruction_loss: 38.0885 - kl_loss: 5.4653
Epoch 00106: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 21s 11ms/step - loss: 43.5529 - reconstruction_loss: 38.0876 - kl_loss: 5.4653 - lr: 5.0000e-04
Epoch 107/200
1873/1875 [============================>.] - ETA: 0s - loss: 43.5179 - reconstruction_loss: 38.0506 - kl_loss: 5.4672
Epoch 00107: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 21s 11ms/step - loss: 43.5195 - reconstruction_loss: 38.0521 - kl_loss: 5.4674 - lr: 5.0000e-04
Epoch 108/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.5484 - reconstruction_loss: 38.0711 - kl_loss: 5.4774
Epoch 00108: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 21s 11ms/step - loss: 43.5449 - reconstruction_loss: 38.0677 - kl_loss: 5.4772 - lr: 5.0000e-04
Epoch 109/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.5202 - reconstruction_loss: 38.0501 - kl_loss: 5.4701
Epoch 00109: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 21s 11ms/step - loss: 43.5200 - reconstruction_loss: 38.0499 - kl_loss: 5.4701 - lr: 5.0000e-04
Epoch 110/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.5222 - reconstruction_loss: 38.0424 - kl_loss: 5.4797
Epoch 00110: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 21s 11ms/step - loss: 43.5232 - reconstruction_loss: 38.0440 - kl_loss: 5.4792 - lr: 5.0000e-04
Epoch 111/200
1874/1875 [============================>.] - ETA: 0s - loss: 43.5234 - reconstruction_loss: 38.0557 - kl_loss: 5.4677
Epoch 00111: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.5185 - reconstruction_loss: 38.0506 - kl_loss: 5.4679 - lr: 5.0000e-04
Epoch 112/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.4947 - reconstruction_loss: 38.0172 - kl_loss: 5.4775
Epoch 00112: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.4943 - reconstruction_loss: 38.0170 - kl_loss: 5.4773 - lr: 5.0000e-04
Epoch 113/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.4632 - reconstruction_loss: 37.9879 - kl_loss: 5.4753
Epoch 00113: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.4588 - reconstruction_loss: 37.9833 - kl_loss: 5.4755 - lr: 5.0000e-04
Epoch 114/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.4630 - reconstruction_loss: 37.9971 - kl_loss: 5.4659
Epoch 00114: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.4668 - reconstruction_loss: 38.0008 - kl_loss: 5.4660 - lr: 5.0000e-04
Epoch 115/200
1872/1875 [============================>.] - ETA: 0s - loss: 43.4407 - reconstruction_loss: 37.9615 - kl_loss: 5.4792
Epoch 00115: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.4433 - reconstruction_loss: 37.9643 - kl_loss: 5.4789 - lr: 5.0000e-04
Epoch 116/200
1872/1875 [============================>.] - ETA: 0s - loss: 43.4576 - reconstruction_loss: 37.9589 - kl_loss: 5.4988
Epoch 00116: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.4583 - reconstruction_loss: 37.9597 - kl_loss: 5.4987 - lr: 5.0000e-04
Epoch 117/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.4710 - reconstruction_loss: 37.9884 - kl_loss: 5.4826
Epoch 00117: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.4650 - reconstruction_loss: 37.9822 - kl_loss: 5.4828 - lr: 5.0000e-04
Epoch 118/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.4389 - reconstruction_loss: 37.9518 - kl_loss: 5.4871
Epoch 00118: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 22s 12ms/step - loss: 43.4410 - reconstruction_loss: 37.9539 - kl_loss: 5.4871 - lr: 5.0000e-04
Epoch 119/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.4153 - reconstruction_loss: 37.9268 - kl_loss: 5.4885
Epoch 00119: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.4213 - reconstruction_loss: 37.9332 - kl_loss: 5.4882 - lr: 5.0000e-04
Epoch 120/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.4304 - reconstruction_loss: 37.9337 - kl_loss: 5.4967
Epoch 00120: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 22s 12ms/step - loss: 43.4279 - reconstruction_loss: 37.9308 - kl_loss: 5.4971 - lr: 5.0000e-04
Epoch 121/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.4270 - reconstruction_loss: 37.9431 - kl_loss: 5.4840
Epoch 00121: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.4262 - reconstruction_loss: 37.9419 - kl_loss: 5.4843 - lr: 5.0000e-04
Epoch 122/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.4131 - reconstruction_loss: 37.9186 - kl_loss: 5.4946
Epoch 00122: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.4092 - reconstruction_loss: 37.9149 - kl_loss: 5.4943 - lr: 5.0000e-04
Epoch 123/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.4050 - reconstruction_loss: 37.9033 - kl_loss: 5.5018
Epoch 00123: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.4015 - reconstruction_loss: 37.8996 - kl_loss: 5.5019 - lr: 5.0000e-04
Epoch 124/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.3577 - reconstruction_loss: 37.8634 - kl_loss: 5.4943
Epoch 00124: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.3660 - reconstruction_loss: 37.8715 - kl_loss: 5.4945 - lr: 5.0000e-04
Epoch 125/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.3613 - reconstruction_loss: 37.8690 - kl_loss: 5.4923
Epoch 00125: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.3633 - reconstruction_loss: 37.8715 - kl_loss: 5.4918 - lr: 5.0000e-04
Epoch 126/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.3388 - reconstruction_loss: 37.8433 - kl_loss: 5.4955
Epoch 00126: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 22s 12ms/step - loss: 43.3392 - reconstruction_loss: 37.8436 - kl_loss: 5.4956 - lr: 5.0000e-04
Epoch 127/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.3783 - reconstruction_loss: 37.8686 - kl_loss: 5.5097
Epoch 00127: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 22s 12ms/step - loss: 43.3749 - reconstruction_loss: 37.8652 - kl_loss: 5.5097 - lr: 5.0000e-04
Epoch 128/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.3438 - reconstruction_loss: 37.8487 - kl_loss: 5.4951
Epoch 00128: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.3447 - reconstruction_loss: 37.8499 - kl_loss: 5.4948 - lr: 5.0000e-04
Epoch 129/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.3283 - reconstruction_loss: 37.8384 - kl_loss: 5.4899
Epoch 00129: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 22s 12ms/step - loss: 43.3334 - reconstruction_loss: 37.8439 - kl_loss: 5.4895 - lr: 5.0000e-04
Epoch 130/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.3474 - reconstruction_loss: 37.8373 - kl_loss: 5.5101
Epoch 00130: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 22s 12ms/step - loss: 43.3505 - reconstruction_loss: 37.8408 - kl_loss: 5.5096 - lr: 5.0000e-04
Epoch 131/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.3473 - reconstruction_loss: 37.8348 - kl_loss: 5.5125
Epoch 00131: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.3462 - reconstruction_loss: 37.8331 - kl_loss: 5.5131 - lr: 5.0000e-04
Epoch 132/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.3151 - reconstruction_loss: 37.8112 - kl_loss: 5.5039
Epoch 00132: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.3243 - reconstruction_loss: 37.8201 - kl_loss: 5.5041 - lr: 5.0000e-04
Epoch 133/200
   1/1875 [..............................] - ETA: 0s - loss: 41.0312 - reconstruction_loss: 35.4891 - kl_loss: 5.5420WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (0.108623). Check your callbacks.
1871/1875 [============================>.] - ETA: 0s - loss: 43.3343 - reconstruction_loss: 37.8241 - kl_loss: 5.5102
Epoch 00133: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.3351 - reconstruction_loss: 37.8251 - kl_loss: 5.5100 - lr: 5.0000e-04
Epoch 134/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.3206 - reconstruction_loss: 37.8110 - kl_loss: 5.5097
Epoch 00134: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.3194 - reconstruction_loss: 37.8099 - kl_loss: 5.5095 - lr: 5.0000e-04
Epoch 135/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.3111 - reconstruction_loss: 37.8061 - kl_loss: 5.5050
Epoch 00135: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.3100 - reconstruction_loss: 37.8055 - kl_loss: 5.5046 - lr: 5.0000e-04
Epoch 136/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.2641 - reconstruction_loss: 37.7594 - kl_loss: 5.5048
Epoch 00136: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.2657 - reconstruction_loss: 37.7612 - kl_loss: 5.5045 - lr: 5.0000e-04
Epoch 137/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.2952 - reconstruction_loss: 37.7916 - kl_loss: 5.5036
Epoch 00137: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.2977 - reconstruction_loss: 37.7944 - kl_loss: 5.5033 - lr: 5.0000e-04
Epoch 138/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.3092 - reconstruction_loss: 37.7993 - kl_loss: 5.5099
Epoch 00138: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.3166 - reconstruction_loss: 37.8067 - kl_loss: 5.5100 - lr: 5.0000e-04
Epoch 139/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.2937 - reconstruction_loss: 37.7694 - kl_loss: 5.5243
Epoch 00139: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 22s 12ms/step - loss: 43.2986 - reconstruction_loss: 37.7748 - kl_loss: 5.5238 - lr: 5.0000e-04
Epoch 140/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.2724 - reconstruction_loss: 37.7588 - kl_loss: 5.5137
Epoch 00140: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.2798 - reconstruction_loss: 37.7665 - kl_loss: 5.5133 - lr: 5.0000e-04
Epoch 141/200
1872/1875 [============================>.] - ETA: 0s - loss: 43.2673 - reconstruction_loss: 37.7601 - kl_loss: 5.5072
Epoch 00141: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.2628 - reconstruction_loss: 37.7555 - kl_loss: 5.5073 - lr: 5.0000e-04
Epoch 142/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.2507 - reconstruction_loss: 37.7389 - kl_loss: 5.5118
Epoch 00142: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.2484 - reconstruction_loss: 37.7366 - kl_loss: 5.5118 - lr: 5.0000e-04
Epoch 143/200
   1/1875 [..............................] - ETA: 0s - loss: 38.8791 - reconstruction_loss: 33.5414 - kl_loss: 5.3377WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (0.117209). Check your callbacks.
1871/1875 [============================>.] - ETA: 0s - loss: 43.2531 - reconstruction_loss: 37.7232 - kl_loss: 5.5299
Epoch 00143: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.2615 - reconstruction_loss: 37.7318 - kl_loss: 5.5297 - lr: 5.0000e-04
Epoch 144/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.2765 - reconstruction_loss: 37.7509 - kl_loss: 5.5255
Epoch 00144: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.2745 - reconstruction_loss: 37.7488 - kl_loss: 5.5257 - lr: 5.0000e-04
Epoch 145/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.2290 - reconstruction_loss: 37.6951 - kl_loss: 5.5339
Epoch 00145: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.2293 - reconstruction_loss: 37.6952 - kl_loss: 5.5341 - lr: 5.0000e-04
Epoch 146/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.2233 - reconstruction_loss: 37.7003 - kl_loss: 5.5230
Epoch 00146: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.2173 - reconstruction_loss: 37.6942 - kl_loss: 5.5231 - lr: 5.0000e-04
Epoch 147/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.2411 - reconstruction_loss: 37.6966 - kl_loss: 5.5444
Epoch 00147: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.2364 - reconstruction_loss: 37.6916 - kl_loss: 5.5448 - lr: 5.0000e-04
Epoch 148/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.2044 - reconstruction_loss: 37.6806 - kl_loss: 5.5238
Epoch 00148: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.1981 - reconstruction_loss: 37.6745 - kl_loss: 5.5236 - lr: 5.0000e-04
Epoch 149/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.2076 - reconstruction_loss: 37.6873 - kl_loss: 5.5202
Epoch 00149: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.2127 - reconstruction_loss: 37.6934 - kl_loss: 5.5194 - lr: 5.0000e-04
Epoch 150/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.2302 - reconstruction_loss: 37.7006 - kl_loss: 5.5296
Epoch 00150: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 22s 12ms/step - loss: 43.2285 - reconstruction_loss: 37.6990 - kl_loss: 5.5295 - lr: 5.0000e-04
Epoch 151/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.1782 - reconstruction_loss: 37.6587 - kl_loss: 5.5196
Epoch 00151: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.1792 - reconstruction_loss: 37.6594 - kl_loss: 5.5198 - lr: 5.0000e-04
Epoch 152/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.1975 - reconstruction_loss: 37.6615 - kl_loss: 5.5360
Epoch 00152: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 22s 12ms/step - loss: 43.2036 - reconstruction_loss: 37.6677 - kl_loss: 5.5359 - lr: 5.0000e-04
Epoch 153/200
1872/1875 [============================>.] - ETA: 0s - loss: 43.2063 - reconstruction_loss: 37.6672 - kl_loss: 5.5391
Epoch 00153: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.2037 - reconstruction_loss: 37.6645 - kl_loss: 5.5392 - lr: 5.0000e-04
Epoch 154/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.2257 - reconstruction_loss: 37.6929 - kl_loss: 5.5328
Epoch 00154: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 22s 12ms/step - loss: 43.2236 - reconstruction_loss: 37.6907 - kl_loss: 5.5329 - lr: 5.0000e-04
Epoch 155/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.1813 - reconstruction_loss: 37.6543 - kl_loss: 5.5269
Epoch 00155: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.1903 - reconstruction_loss: 37.6638 - kl_loss: 5.5265 - lr: 5.0000e-04
Epoch 156/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.1663 - reconstruction_loss: 37.6342 - kl_loss: 5.5322
Epoch 00156: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.1665 - reconstruction_loss: 37.6340 - kl_loss: 5.5325 - lr: 5.0000e-04
Epoch 157/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.1191 - reconstruction_loss: 37.5838 - kl_loss: 5.5353
Epoch 00157: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 22s 12ms/step - loss: 43.1282 - reconstruction_loss: 37.5932 - kl_loss: 5.5350 - lr: 5.0000e-04
Epoch 158/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.1892 - reconstruction_loss: 37.6523 - kl_loss: 5.5369
Epoch 00158: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.1970 - reconstruction_loss: 37.6604 - kl_loss: 5.5367 - lr: 5.0000e-04
Epoch 159/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.1445 - reconstruction_loss: 37.6031 - kl_loss: 5.5413
Epoch 00159: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.1443 - reconstruction_loss: 37.6024 - kl_loss: 5.5419 - lr: 5.0000e-04
Epoch 160/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.1628 - reconstruction_loss: 37.6203 - kl_loss: 5.5425
Epoch 00160: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 22s 12ms/step - loss: 43.1514 - reconstruction_loss: 37.6086 - kl_loss: 5.5428 - lr: 5.0000e-04
Epoch 161/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.1633 - reconstruction_loss: 37.6195 - kl_loss: 5.5438
Epoch 00161: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.1589 - reconstruction_loss: 37.6149 - kl_loss: 5.5440 - lr: 5.0000e-04
Epoch 162/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.1523 - reconstruction_loss: 37.6128 - kl_loss: 5.5395
Epoch 00162: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 22s 12ms/step - loss: 43.1550 - reconstruction_loss: 37.6157 - kl_loss: 5.5393 - lr: 5.0000e-04
Epoch 163/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.1143 - reconstruction_loss: 37.5692 - kl_loss: 5.5452
Epoch 00163: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 22s 12ms/step - loss: 43.1242 - reconstruction_loss: 37.5793 - kl_loss: 5.5449 - lr: 5.0000e-04
Epoch 164/200
   1/1875 [..............................] - ETA: 0s - loss: 40.7066 - reconstruction_loss: 35.2510 - kl_loss: 5.4556WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (0.100444). Check your callbacks.
1871/1875 [============================>.] - ETA: 0s - loss: 43.1405 - reconstruction_loss: 37.6081 - kl_loss: 5.5324
Epoch 00164: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 22s 12ms/step - loss: 43.1449 - reconstruction_loss: 37.6124 - kl_loss: 5.5325 - lr: 5.0000e-04
Epoch 165/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.0939 - reconstruction_loss: 37.5525 - kl_loss: 5.5414
Epoch 00165: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.0916 - reconstruction_loss: 37.5501 - kl_loss: 5.5414 - lr: 5.0000e-04
Epoch 166/200
1872/1875 [============================>.] - ETA: 0s - loss: 43.1642 - reconstruction_loss: 37.6168 - kl_loss: 5.5475
Epoch 00166: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 22s 12ms/step - loss: 43.1707 - reconstruction_loss: 37.6235 - kl_loss: 5.5472 - lr: 5.0000e-04
Epoch 167/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.1226 - reconstruction_loss: 37.5632 - kl_loss: 5.5594
Epoch 00167: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.1282 - reconstruction_loss: 37.5689 - kl_loss: 5.5592 - lr: 5.0000e-04
Epoch 168/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.1080 - reconstruction_loss: 37.5681 - kl_loss: 5.5399
Epoch 00168: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.1051 - reconstruction_loss: 37.5649 - kl_loss: 5.5402 - lr: 5.0000e-04
Epoch 169/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.1063 - reconstruction_loss: 37.5636 - kl_loss: 5.5428
Epoch 00169: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.1133 - reconstruction_loss: 37.5703 - kl_loss: 5.5430 - lr: 5.0000e-04
Epoch 170/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.1105 - reconstruction_loss: 37.5567 - kl_loss: 5.5538
Epoch 00170: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.1107 - reconstruction_loss: 37.5569 - kl_loss: 5.5538 - lr: 5.0000e-04
Epoch 171/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.0596 - reconstruction_loss: 37.5026 - kl_loss: 5.5571
Epoch 00171: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 22s 12ms/step - loss: 43.0669 - reconstruction_loss: 37.5097 - kl_loss: 5.5573 - lr: 5.0000e-04
Epoch 172/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.1334 - reconstruction_loss: 37.5762 - kl_loss: 5.5572
Epoch 00172: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.1320 - reconstruction_loss: 37.5744 - kl_loss: 5.5576 - lr: 5.0000e-04
Epoch 173/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.0878 - reconstruction_loss: 37.5289 - kl_loss: 5.5589
Epoch 00173: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.0816 - reconstruction_loss: 37.5225 - kl_loss: 5.5591 - lr: 5.0000e-04
Epoch 174/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.0558 - reconstruction_loss: 37.5073 - kl_loss: 5.5485
Epoch 00174: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 22s 12ms/step - loss: 43.0562 - reconstruction_loss: 37.5073 - kl_loss: 5.5489 - lr: 5.0000e-04
Epoch 175/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.0818 - reconstruction_loss: 37.5267 - kl_loss: 5.5551
Epoch 00175: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.0857 - reconstruction_loss: 37.5311 - kl_loss: 5.5546 - lr: 5.0000e-04
Epoch 176/200
   1/1875 [..............................] - ETA: 0s - loss: 44.3812 - reconstruction_loss: 39.2738 - kl_loss: 5.1073WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (0.130283). Check your callbacks.
1871/1875 [============================>.] - ETA: 0s - loss: 43.0469 - reconstruction_loss: 37.4977 - kl_loss: 5.5492
Epoch 00176: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.0473 - reconstruction_loss: 37.4982 - kl_loss: 5.5490 - lr: 5.0000e-04
Epoch 177/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.0525 - reconstruction_loss: 37.4918 - kl_loss: 5.5607
Epoch 00177: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.0499 - reconstruction_loss: 37.4893 - kl_loss: 5.5605 - lr: 5.0000e-04
Epoch 178/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.0703 - reconstruction_loss: 37.5175 - kl_loss: 5.5527
Epoch 00178: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.0721 - reconstruction_loss: 37.5191 - kl_loss: 5.5530 - lr: 5.0000e-04
Epoch 179/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.0368 - reconstruction_loss: 37.4752 - kl_loss: 5.5616
Epoch 00179: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.0420 - reconstruction_loss: 37.4808 - kl_loss: 5.5612 - lr: 5.0000e-04
Epoch 180/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.0397 - reconstruction_loss: 37.4879 - kl_loss: 5.5518
Epoch 00180: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.0398 - reconstruction_loss: 37.4878 - kl_loss: 5.5520 - lr: 5.0000e-04
Epoch 181/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.0516 - reconstruction_loss: 37.5024 - kl_loss: 5.5492
Epoch 00181: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.0608 - reconstruction_loss: 37.5115 - kl_loss: 5.5493 - lr: 5.0000e-04
Epoch 182/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.0179 - reconstruction_loss: 37.4668 - kl_loss: 5.5511
Epoch 00182: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.0221 - reconstruction_loss: 37.4711 - kl_loss: 5.5510 - lr: 5.0000e-04
Epoch 183/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.0274 - reconstruction_loss: 37.4613 - kl_loss: 5.5661
Epoch 00183: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 22s 12ms/step - loss: 43.0314 - reconstruction_loss: 37.4648 - kl_loss: 5.5666 - lr: 5.0000e-04
Epoch 184/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.0188 - reconstruction_loss: 37.4603 - kl_loss: 5.5585
Epoch 00184: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.0091 - reconstruction_loss: 37.4503 - kl_loss: 5.5589 - lr: 5.0000e-04
Epoch 185/200
1872/1875 [============================>.] - ETA: 0s - loss: 43.0056 - reconstruction_loss: 37.4443 - kl_loss: 5.5613
Epoch 00185: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.0044 - reconstruction_loss: 37.4429 - kl_loss: 5.5615 - lr: 5.0000e-04
Epoch 186/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.0168 - reconstruction_loss: 37.4655 - kl_loss: 5.5513
Epoch 00186: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 22s 12ms/step - loss: 43.0182 - reconstruction_loss: 37.4671 - kl_loss: 5.5511 - lr: 5.0000e-04
Epoch 187/200
1871/1875 [============================>.] - ETA: 0s - loss: 43.0022 - reconstruction_loss: 37.4453 - kl_loss: 5.5569
Epoch 00187: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.0027 - reconstruction_loss: 37.4460 - kl_loss: 5.5567 - lr: 5.0000e-04
Epoch 188/200
1872/1875 [============================>.] - ETA: 0s - loss: 43.0151 - reconstruction_loss: 37.4482 - kl_loss: 5.5669
Epoch 00188: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 43.0110 - reconstruction_loss: 37.4439 - kl_loss: 5.5671 - lr: 5.0000e-04
Epoch 189/200
1871/1875 [============================>.] - ETA: 0s - loss: 42.9756 - reconstruction_loss: 37.4189 - kl_loss: 5.5568
Epoch 00189: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 42.9759 - reconstruction_loss: 37.4188 - kl_loss: 5.5572 - lr: 5.0000e-04
Epoch 190/200
1871/1875 [============================>.] - ETA: 0s - loss: 42.9925 - reconstruction_loss: 37.4364 - kl_loss: 5.5561
Epoch 00190: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 42.9893 - reconstruction_loss: 37.4331 - kl_loss: 5.5563 - lr: 5.0000e-04
Epoch 191/200
1871/1875 [============================>.] - ETA: 0s - loss: 42.9849 - reconstruction_loss: 37.4216 - kl_loss: 5.5633
Epoch 00191: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 42.9809 - reconstruction_loss: 37.4177 - kl_loss: 5.5632 - lr: 5.0000e-04
Epoch 192/200
1871/1875 [============================>.] - ETA: 0s - loss: 42.9990 - reconstruction_loss: 37.4263 - kl_loss: 5.5727
Epoch 00192: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 22s 12ms/step - loss: 43.0069 - reconstruction_loss: 37.4341 - kl_loss: 5.5728 - lr: 5.0000e-04
Epoch 193/200
1871/1875 [============================>.] - ETA: 0s - loss: 42.9824 - reconstruction_loss: 37.4149 - kl_loss: 5.5675
Epoch 00193: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 42.9867 - reconstruction_loss: 37.4191 - kl_loss: 5.5676 - lr: 5.0000e-04
Epoch 194/200
1873/1875 [============================>.] - ETA: 0s - loss: 43.0013 - reconstruction_loss: 37.4296 - kl_loss: 5.5718
Epoch 00194: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 42.9948 - reconstruction_loss: 37.4227 - kl_loss: 5.5722 - lr: 5.0000e-04
Epoch 195/200
1873/1875 [============================>.] - ETA: 0s - loss: 42.9749 - reconstruction_loss: 37.4144 - kl_loss: 5.5605
Epoch 00195: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 22s 12ms/step - loss: 42.9737 - reconstruction_loss: 37.4134 - kl_loss: 5.5603 - lr: 5.0000e-04
Epoch 196/200
1874/1875 [============================>.] - ETA: 0s - loss: 42.9663 - reconstruction_loss: 37.3932 - kl_loss: 5.5731
Epoch 00196: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 24s 13ms/step - loss: 42.9684 - reconstruction_loss: 37.3955 - kl_loss: 5.5729 - lr: 5.0000e-04
Epoch 197/200
   1/1875 [..............................] - ETA: 0s - loss: 42.0069 - reconstruction_loss: 36.5797 - kl_loss: 5.4272WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (0.105742). Check your callbacks.
1873/1875 [============================>.] - ETA: 0s - loss: 42.9739 - reconstruction_loss: 37.3991 - kl_loss: 5.5749
Epoch 00197: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 23s 12ms/step - loss: 42.9747 - reconstruction_loss: 37.4000 - kl_loss: 5.5746 - lr: 5.0000e-04
Epoch 198/200
1875/1875 [==============================] - ETA: 0s - loss: 42.9874 - reconstruction_loss: 37.4234 - kl_loss: 5.5640
Epoch 00198: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 22s 12ms/step - loss: 42.9877 - reconstruction_loss: 37.4238 - kl_loss: 5.5639 - lr: 5.0000e-04
Epoch 199/200
1873/1875 [============================>.] - ETA: 0s - loss: 42.9666 - reconstruction_loss: 37.4075 - kl_loss: 5.5590
Epoch 00199: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 22s 12ms/step - loss: 42.9679 - reconstruction_loss: 37.4088 - kl_loss: 5.5590 - lr: 5.0000e-04
Epoch 200/200
1871/1875 [============================>.] - ETA: 0s - loss: 42.9449 - reconstruction_loss: 37.3793 - kl_loss: 5.5656
Epoch 00200: saving model to run/vae/0002_digits\weights/weights.h5
1875/1875 [==============================] - 22s 12ms/step - loss: 42.9461 - reconstruction_loss: 37.3802 - kl_loss: 5.5658 - lr: 5.0000e-04

[Note to self (2020/02/20)]

It ran after removing checkpoint1, but verify later that training actually completed correctly.

3.4.2 Loss Function

The previous loss function was the mean squared error between the input image and its encoded-then-decoded reconstruction. To this we add the Kullback–Leibler (KL) divergence. The KL divergence measures how much one probability distribution differs from another.

kl_loss = -0.5 * sum(1 + log_var - mu ^ 2 - exp(log_var))

$\displaystyle D_{KL}\left[\,N(\mu, \sigma)\,\|\,N(0, 1)\,\right] = - \frac{1}{2} \sum \left(1 + \log(\sigma^2) - \mu^2 - \sigma^2\right)$

The sum is taken over all dimensions of the latent space. kl_loss attains its minimum of $0$ when mu = 0 and log_var = 0 in every dimension.

The KL divergence term therefore penalizes the network for encoding observations to mu and log_var values that differ significantly from the parameters of the standard normal distribution (mu = 0 and log_var = 0).
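As a quick sanity check, the kl_loss formula above can be evaluated directly with NumPy (a standalone sketch, not the Keras-backend version used in the notebook); it is zero exactly when mu = 0 and log_var = 0 in every latent dimension, and positive otherwise:

```python
import numpy as np

def kl_loss(mu, log_var):
    # KL divergence between N(mu, exp(log_var)) and the standard normal N(0, I),
    # summed over the latent dimensions:
    # -0.5 * sum(1 + log_var - mu^2 - exp(log_var))
    return -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var))

# Minimum: mu = 0 and log_var = 0 in every dimension gives kl_loss = 0
print(kl_loss(np.zeros(2), np.zeros(2)))                      # 0.0

# Any deviation from the standard normal parameters is penalized
print(kl_loss(np.array([1.0, -0.5]), np.array([0.2, 0.0])))   # > 0
```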
