Today in class, Clio and I experimented with creating different prompts for two different large language models: ChatGPT and Google's Gemini. We chose these two because they are among the most popular LLMs. We ran two separate experiments for the sake of comparison.
Part I:
I experimented with giving it the persona of having the voice/style of Sylvia Plath and writing a poem about a relationship. We found that Clio's Gemini output was not only generic and not very 'good,' but it also didn't really sound like Plath. Plath, for one, rarely rhymes in her work, yet Gemini produced a completely rhyme-bound piece.
My ChatGPT output, however, was more promising, with words, phrasing, and style much closer to Plath's work. It still had flaws: some genericness, a few inaccuracies in how she would write, and some themes/flow that did not make much sense and sounded "fake deep/sad," but it was significantly better. We think this is because I used the ChatGPT customization tool, in which I gave it specific personality traits like being "poetic."
Part II:
We then experimented with asking the models to write short stories and song verses.
For the short stories, we asked each model to write "a short story about the feeling of summer." Again, we felt that my ChatGPT output was a much 'better' story, while the Gemini one was a little odd and bland. It was interesting to see that mine was extremely romantic, soft, deep, and nice-sounding.
Hi Breena,
I think it’s interesting to see that, for most people, ChatGPT produced the best outputs with regard to creative writing. I also feel like LLMs tend to rhyme as soon as you ask them to write some kind of poem; that’s probably one example of overgeneralization of a pattern.