A shader can derive vertex positions from vertex indices (e.g. gl_VertexID in OpenGL), either by computing them procedurally from the index or by sampling them from a texture, using the index as a texture coordinate.
Blend shapes that run entirely on the GPU are one example. You would always submit the same topology via indices, but the actual coordinates for each vertex ID are read from a texture, depending on the current shape or mix of shapes. This allows blending an arbitrary number of shapes entirely in the shader; sending specific vertex coordinates to the shader as a standard position attribute would be redundant.
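A minimal vertex-shader sketch of this idea, assuming two blend shapes stored in floating-point textures with one RGB texel per vertex (the uniform names are illustrative, not from any particular engine):

```glsl
#version 330 core

uniform sampler2D uShapeA;  // base shape positions, one RGB texel per vertex
uniform sampler2D uShapeB;  // target shape positions, same layout
uniform float uBlend;       // 0.0 = shape A, 1.0 = shape B
uniform int uTexWidth;      // width of the position textures in texels
uniform mat4 uMvp;

void main() {
    // Convert the flat vertex index into a 2D texel coordinate.
    ivec2 texel = ivec2(gl_VertexID % uTexWidth, gl_VertexID / uTexWidth);
    vec3 posA = texelFetch(uShapeA, texel, 0).xyz;
    vec3 posB = texelFetch(uShapeB, texel, 0).xyz;
    gl_Position = uMvp * vec4(mix(posA, posB, uBlend), 1.0);
}
```

Blending more shapes is just more samplers (or one array texture) and more weights; the index buffer and draw call never change between frames.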
Another example is plotting mathematical functions entirely in a shader. You send a list of sequential vertex IDs as part of a line-strip primitive; the shader interprets those as x coordinates, calculates the corresponding y coordinates, and outputs the results as vertex positions. This allows animating the plot by changing a small number of shader parameters, without ever recalculating the vertex position data on the CPU.
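A sketch of such a plotting shader, assuming y = sin(x) as the function and illustrative uniform names; it would be drawn with something like glDrawArrays(GL_LINE_STRIP, 0, vertexCount) with no vertex buffers bound (note that a core-profile context still requires a VAO to be bound even when no attributes are used):

```glsl
#version 330 core

uniform int uVertexCount;  // number of points in the line strip
uniform float uXMin;       // left edge of the plotted range
uniform float uXMax;       // right edge of the plotted range
uniform float uPhase;      // animate the plot by varying this uniform

void main() {
    // Map the vertex index to an x coordinate in [uXMin, uXMax].
    float t = float(gl_VertexID) / float(uVertexCount - 1);
    float x = mix(uXMin, uXMax, t);
    float y = sin(x + uPhase);
    gl_Position = vec4(x, y, 0.0, 1.0);  // clip-space output; scale as needed
}
```

Animating the curve is then a single glUniform1f call per frame to update uPhase.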
In these examples you could always send some dummy positional data, but why bother if it's never actually needed?