What does «train_config» → «batch_size» mean in TensorFlow?

The batch size is the number of samples you feed into your network.

stackoverflow.com/a/42999449

The size of this batch (batch_size) is the number of training samples used for this training pass.
You are approximating the loss, and therefore the gradient, of your whole dataset by computing it over just batch_size samples.
This basic terminology is explained in many introductory courses to neural networks.
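As a sketch of that idea in plain NumPy (a made-up linear least-squares model, not the TensorFlow API): the gradient over a mini-batch of batch_size samples estimates the gradient over the full dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))         # full dataset: 1000 samples, 3 features
y = X @ np.array([1.0, -2.0, 0.5])     # targets from a known linear rule

w = np.zeros(3)                        # current model weights
batch_size = 32

# Mean-squared-error gradient over one mini-batch of batch_size samples —
# a noisy estimate of the gradient over all 1000 samples.
idx = rng.choice(len(X), size=batch_size, replace=False)
Xb, yb = X[idx], y[idx]
grad_batch = 2 * Xb.T @ (Xb @ w - yb) / batch_size

# The exact gradient over the whole dataset, for comparison.
grad_full = 2 * X.T @ (X @ w - y) / len(X)
```

The batch gradient has the same shape as the full gradient and fluctuates around it; larger batch_size reduces the fluctuation at the cost of more computation per step.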

stackoverflow.com/a/49469568

train_steps basically counts the batches.
During training, you'll read batch_size * train_steps CSV rows, so you have to make sure that this number is lower than total_rows_csv * num_epochs in your input reader; alternatively, with num_epochs=None it will cycle indefinitely through your data.
You'll train on the whole data once after train_steps = total_rows_csv / batch_size steps (that is, 1 epoch); then it will go over the same data again, and so on.
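The arithmetic in that answer can be sketched as follows (the row count, batch size, and epoch count are hypothetical):

```python
total_rows_csv = 10_000   # hypothetical number of rows in the training CSV
batch_size = 100
num_epochs = 5

steps_per_epoch = total_rows_csv // batch_size    # steps that make up 1 epoch
max_train_steps = steps_per_epoch * num_epochs    # upper bound on train_steps

# With a finite num_epochs, batch_size * train_steps must stay within the
# total rows the reader can produce; with num_epochs=None the reader cycles
# forever, so any train_steps value works.
assert max_train_steps * batch_size <= total_rows_csv * num_epochs
```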

stackoverflow.com/questions/48766174#comment84609085_48783124

Batch size is the number of samples you feed into the network for each training round.
So for each epoch, you can split your training sets into multiple batches.
For example, I have 1000 images.
If I set my batch size to 1, then for each epoch (training round), the network sees 1000 batches of 1 image each.
If I set my batch size to 2, then it will be 500 batches of 2 images.
Meaning, for each epoch, I will run 500 rounds, each round using 2 images.
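The split described above can be illustrated in plain Python, with integers standing in for the 1000 images:

```python
images = list(range(1000))  # stand-ins for 1000 images

def make_batches(data, batch_size):
    """Split data into consecutive batches of batch_size items each."""
    return [data[i:i + batch_size] for i in range(0, len(data), batch_size)]

# batch_size=1 -> 1000 rounds of 1 image per epoch
# batch_size=2 -> 500 rounds of 2 images per epoch
assert len(make_batches(images, 1)) == 1000
assert len(make_batches(images, 2)) == 500
assert len(make_batches(images, 2)[0]) == 2
```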
Step size is just the learning rate that you use for your optimizer.
Usually, we start with 0.001 or 0.01.
I recommend watching Andrew Ng's Machine Learning videos on Coursera to get a good overall understanding of ML.

groups.google.com/a/tensorflow.org/d/msg/discuss/hjSd-Cl53B4/bVlKTO4GBgAJ

This defines the number of work elements in your batch.
TensorFlow requires a fixed number and doesn't take GPU memory or data size into consideration.
This number is highly dependent on your GPU hardware and image dimensions, and isn’t strictly necessary for quality results.
TensorFlow requires each input array to have the same dimensionality, which means that any batch_size > 1 requires an image_resizer of type fixed_shape_resizer.
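In the TF Object Detection API, this pairing lives in the pipeline.config proto text. A minimal sketch (the batch size, resizer dimensions, and ssd model choice are placeholders for whatever your model uses):

```
train_config {
  batch_size: 8
}
model {
  ssd {
    image_resizer {
      fixed_shape_resizer {
        height: 300
        width: 300
      }
    }
  }
}
```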

blog.algorithmia.com/deep-dive-into-object-detection-with-open-images-using-tensorflow

See also: