I think the more logical concern is poorly-configured or out-of-date models. Buggy software can cause all kinds of problems, and some of them are obvious (crashes, graphical glitches) while others are subtle yet dangerous (incorrect output with catastrophic long-term effects).
Slim mentioned that AI image generators have already gotten good at rendering human hands. A relatively obvious tell like that is a good way to spot who is using outdated AI to generate art; in time, we may even be able to identify which version of a model was used by the quality of the work. In other fields, though, outdated or poorly-designed AI can create problems that aren't visible right away but are significant nonetheless.
As with just about every human discovery and invention, we are likely to learn a few lessons the hard way.