I think AI-generated websites tend to use a lot of emojis for almost everything, along with bright, colorful color schemes. Another indication is a single HTML file per page, with embedded CSS and JS. In my opinion, most developers who wrote the code themselves wouldn't heavily embed their CSS and JS in the HTML file, for readability's sake.
Repeated code, like SingleFormDialog() and SimpleDialog(), or PositiveButton() and PrimaryButton(). Humans do this too, but usually only when different people wrote the classes.
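A hypothetical sketch of that duplication pattern, borrowing the button names above (neither function is from a real codebase): two functions that differ only in name, as might happen when each was generated in a separate pass without awareness of the other.

```javascript
// Hypothetical: two near-identical helpers an LLM might emit in separate passes.
function PositiveButton(label) {
  return { type: 'button', style: 'primary', label: label };
}

function PrimaryButton(label) {
  return { type: 'button', style: 'primary', label: label };
}
```

A human maintainer would usually notice the overlap and collapse them into one function; the tell is that both survive side by side.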
Tests that are not connected to anything. You'll have FooTest, but it's all mocks and never touches Foo. My favorite was something like Assert('this function is able to destroy a planet'), which passes simply because a non-empty string is truthy.
Comments that act as signboards telling you where to go next.
Really specific comments like `Alice function has been moved to RepositoryAlice`, especially about things nobody would ever ask.
The code doesn't work in subtle ways, shows a low level of craft, and slowly halts development speed over time because the abstractions haven't been well thought out. It looks good on the surface if you don't spend the energy to investigate why it isn't, or don't have enough experience to spot the sloppy work.
- Needless guards, e.g. `if (document) document.body…` in the browser.
- Backwards compatibility: "This way handles version X," where version X reached end-of-life 10 years ago.
- Unit tests with too much overlap. "Should add positive nums, Adds neg nums, Adds zero, …"
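A made-up sketch combining two of the tells above — guards on values that cannot be missing, and near-duplicate tests that all exercise the same code path (none of these names come from a real codebase):

```javascript
// Hypothetical: needless guards on arguments that callers always supply.
function add(a, b) {
  if (a === null || a === undefined) return 0; // callers always pass numbers,
  if (b === null || b === undefined) return 0; // so these branches are dead weight
  return a + b;
}

// Three tests, one behavior -- they all walk the same code path.
console.assert(add(1, 2) === 3, 'should add positive nums');
console.assert(add(-1, -2) === -3, 'adds neg nums');
console.assert(add(0, 0) === 0, 'adds zero');
```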
Lots of docstrings and lots of functions.
Two methods that could have been one method. Or one method that could have been split into multiple.
I’ve noticed that AI is really terrible at following instructions sometimes. Either it takes them too literally, or it completely ignores them.
// Comments with first letter capitalized detailing something very obvious on each line.
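A short, invented illustration of that pattern — every trivial line narrated by a capitalized comment:

```javascript
// Hypothetical example: each obvious step gets its own comment.
// Initialize the total to zero.
let total = 0;
// Define the list of numbers.
const numbers = [1, 2, 3];
// Loop over each number in the list.
for (const n of numbers) {
  // Add the current number to the total.
  total += n;
}
// Log the total to the console.
console.log(total); // prints 6
```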
While I'm sure this question is being asked in good faith, and this site is certainly the place for discussion of such matters, anyone replying might want to consider that you are assisting both sides in the development of this tech by pointing out its identifying features. Every clue to how LLMs generate and display output can then be better hidden in their next iteration.
Think twice; don't feed the beast.
> what are some clues that code is AI generated?
Who gives a shit?
If it works it works.