

[ERR] tf.get_variable_scope() (which is identical to vscope) is explicitly created rather than the implicit default?

Maj0r Tom 2018. 6. 24. 23:01

TensorFlow Error


Error Message:

Is it just because tf.get_variable_scope() (which is identical to vscope) is explicitly created rather than the implicit default? Then, how do these two VariableScope objects differ?
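
As for how the two VariableScope objects differ: re-entering the current scope through tf.variable_scope(...) pushes a temporary copy of it onto the scope stack, so a reuse flag set on that copy disappears when the with block exits, while a flag set directly on the object returned by tf.get_variable_scope() sticks to the graph. A minimal sketch (TensorFlow 1.x assumed):

import tensorflow as tf

with tf.Graph().as_default():
    print(tf.get_variable_scope().reuse)        # False: the implicit default scope

    # Re-entering the default scope explicitly: reuse only holds inside the block
    with tf.variable_scope(tf.get_variable_scope(), reuse=True):
        print(tf.get_variable_scope().reuse)    # True inside the block
    print(tf.get_variable_scope().reuse)        # False again after the block

    # Mutating the default scope directly: the flag never reverts
    tf.get_variable_scope().reuse_variables()
    print(tf.get_variable_scope().reuse)        # True from here on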



Error Code:

with graph.as_default():
    # Average the gradients
    grads_e = average_gradients(tower_grads_e)
    grads_g = average_gradients(tower_grads_g)
    grads_d = average_gradients(tower_grads_d)
 
    # apply the gradients with our optimizers
    train_E = opt_E.apply_gradients(grads_e, global_step=global_step)
    train_G = opt_G.apply_gradients(grads_g, global_step=global_step)
    train_D = opt_D.apply_gradients(grads_d, global_step=global_step)
 
---------------------------
opt_E was initiated from....
 
with graph.as_default():
    # with tf.Graph().as_default(), tf.device('/cpu:0'):
    # Create a variable to count number of train calls
    global_step = tf.get_variable(
        'global_step', [],
        initializer=tf.constant_initializer(0), trainable=False)
 
 
    # different optimizers are needed for different learning rates
    # (using the same learning rate seems to work fine though)
    lr_D = tf.placeholder(tf.float32, shape=[])
    lr_G = tf.placeholder(tf.float32, shape=[])
    lr_E = tf.placeholder(tf.float32, shape=[])
    opt_D = tf.train.AdamOptimizer(lr_D, epsilon=1.0)
    opt_G = tf.train.AdamOptimizer(lr_G, epsilon=1.0)
    opt_E = tf.train.AdamOptimizer(lr_E, epsilon=1.0)
 
-------------------------------------------------



Problem:

tf.get_variable_scope() is leaking reuse: once the reuse flag is set on the graph's default variable scope (typically via reuse_variables() in the multi-tower loop), it stays set for everything that follows, so later calls that have to create new variables, such as the optimizers' apply_gradients() creating the Adam slot variables, fail.
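
A minimal reproduction of the leak (TensorFlow 1.x assumed; 'v' and 'w' are made-up names): once reuse is switched on for the default scope, any later tf.get_variable() call that has to create a brand-new variable, which is exactly what apply_gradients() does for the Adam slot variables, raises a ValueError:

import tensorflow as tf

with tf.Graph().as_default():
    v = tf.get_variable('v', shape=[], initializer=tf.zeros_initializer())

    # Typical multi-tower pattern: share the weights between towers
    tf.get_variable_scope().reuse_variables()
    v_shared = tf.get_variable('v', shape=[])   # fine, reuses 'v'

    # reuse is still set on the default scope, so creating anything new fails,
    # roughly: "Variable w does not exist, or was not created with tf.get_variable()"
    try:
        w = tf.get_variable('w', shape=[])
    except ValueError as err:
        print(err)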



Solution:

Wrap the code in a variable scope opened with the reuse=tf.AUTO_REUSE option, i.e. re-enter the current scope:

with tf.variable_scope(tf.get_variable_scope(), reuse=tf.AUTO_REUSE) as scope:

(I haven't tried it, but reuse=False would probably work as well.)
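
For reference, reuse=tf.AUTO_REUSE (added around TF 1.4) tells tf.get_variable() to create the variable on the first call and silently reuse it on every later call, so the same code path works both for creation and for sharing. A small sketch with a hypothetical helper:

import tensorflow as tf

def get_bias():
    # AUTO_REUSE: create on the first call, reuse afterwards
    with tf.variable_scope('layer', reuse=tf.AUTO_REUSE):
        return tf.get_variable('b', shape=[], initializer=tf.zeros_initializer())

with tf.Graph().as_default():
    b1 = get_bias()   # created
    b2 = get_bias()   # reused
    print(b1 is b2)   # True: the same underlying variable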


Fixed Code:

with graph.as_default():
    with tf.variable_scope(tf.get_variable_scope(), reuse=tf.AUTO_REUSE):
        # Average the gradients
        grads_e = average_gradients(tower_grads_e)
        grads_g = average_gradients(tower_grads_g)
        grads_d = average_gradients(tower_grads_d)

        # apply the gradients with our optimizers
        train_E = opt_E.apply_gradients(grads_e, global_step=global_step)
        train_G = opt_G.apply_gradients(grads_g, global_step=global_step)
        train_D = opt_D.apply_gradients(grads_d, global_step=global_step)
-------------------







References

https://github.com/tensorflow/tensorflow/issues/6220

https://github.com/aperwe/GANs/issues/2






