
"TypeError: Input 'global_step' Of 'ResourceApplyAdagradDA' Op Has Type Int32 That Does Not Match Expected Type Of Int64." What Is This Bug?

While I was trying to use the AdagradDA (dual averaging) optimizer, I got an error related to the batch size I had entered. The batch size I entered was 300 because I have 60,000 samples to train. My co…

Solution 1:

Looking at the error message:

TypeError: Input 'global_step' of 'ResourceApplyAdagradDA' Op has type int32 that does not match expected type of int64

It seems that the second argument to the optimizer (global_step) is expected to be int64. Since you are passing a plain Python integer, it is converted to int32 by default. Try this instead:

import tensorflow as tf

# global_step must be int64, so wrap it in an explicit int64 constant
optimizer1 = tf.compat.v1.train.AdagradDAOptimizer(0.001, tf.constant(0, tf.int64))

I'm not sure this is completely correct, though. The global step should probably be a variable that you increment after each training step; with a constant, the optimizer may behave as if it were on the first step the whole time.
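For reference, here is a minimal sketch of that idea (not the original poster's code; the toy model, variable names, and values are my own assumptions). It keeps the step count in a non-trainable int64 variable and passes it to minimize(), which increments it by 1 on every training step, using the TF 1.x-style compat API:

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# Toy example: fit a single weight w so that w * x is close to 1.
x = tf.compat.v1.placeholder(tf.float32, shape=[])
w = tf.Variable(1.0)
loss = tf.square(w * x - 1.0)

# Keep the step count in a non-trainable int64 variable, not a constant.
global_step = tf.Variable(0, dtype=tf.int64, trainable=False)

optimizer = tf.compat.v1.train.AdagradDAOptimizer(0.001, global_step)

# Passing global_step to minimize() makes it increment by 1 on every run.
train_op = optimizer.minimize(loss, global_step=global_step)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    for _ in range(3):
        sess.run(train_op, feed_dict={x: 2.0})
    print(sess.run(global_step))  # -> 3

This way the dual-averaging update sees the true step count advance, rather than being stuck at step 0 as it would be with a constant.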

