Submitted by citizentim t3_123uwh6 in singularity

I was looking over last week's Steve Jobs AI conversation-- the one where the guy linked up Elevenlabs and Facebook Messenger to have a "conversation" with Steve Jobs.

It was...interesting...but clearly lacked any personality. It got me thinking: how hard would it be to feed in, say-- the Steve Jobs biography, and transcripts of interviews with him, to create something that "spoke" more like him?

Legitimately, I'm not trying to make anything like this-- I'm fairly dumb when it comes to the inner workings of LLMs, and I'm no coder. I'm just wondering, in layman's terms, if and how it could be done.

2

Comments


throndir t1_jdwe9hp wrote

You could add a set of instructions on top of an LLM, provided you give it enough personality information. That's similar to what http://character.ai does.
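
To make that concrete, here's a minimal sketch of the "instructions on top of an LLM" idea, assuming an OpenAI-style chat API (the pre-1.0 Python client) as the backend; the persona prompt and the excerpts placeholder are invented examples, not anything character.ai actually uses:

```python
# Minimal persona-on-top-of-an-LLM sketch. Assumes the openai<1.0 Python
# client and an OPENAI_API_KEY environment variable; all prompt text below
# is an illustrative placeholder.
import openai

PERSONA_PROMPT = """You are role-playing as Steve Jobs.
Stay in character: direct, passionate, product-obsessed.
Ground your tone in the reference material below.

Reference excerpts (from interviews and the biography):
{excerpts}
"""

def chat_as_persona(user_message: str, excerpts: str) -> str:
    """Send one message with the persona instructions layered on top."""
    messages = [
        # The "set of instructions" lives in the system message.
        {"role": "system", "content": PERSONA_PROMPT.format(excerpts=excerpts)},
        {"role": "user", "content": user_message},
    ]
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=messages,
        temperature=0.8,  # a little looser so the "voice" comes through
    )
    return response["choices"][0]["message"]["content"]
```

The point being that the "personality" lives entirely in that system prompt, so the quality of the source excerpts is what makes it "speak" like the person.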

8

AndiLittle t1_jdwv5zd wrote

I just came here after a long chat with Socrates to say THANK YOU for posting that link. I was so disappointed by other LLMs; they all feel artificial, or unhinged and cringey, but this one felt so human and real, and I had so much fun with it! Not sure if it matters much, but you really made someone very happy tonight!

3

citizentim OP t1_jdwegw6 wrote

Ahhh-- I forgot about Character.ai! Right...I guess it COULD be done-- and probably better than that dude did it.

1

GenoHuman t1_jdwx77v wrote

You will be able to copy a real person's personality onto an LLM. Not only that, you'll be able to copy their voice and appearance too, practically making a digital replica. And the best part is, you don't need their permission!

2

citizentim OP t1_jdx2fjs wrote

I know, man...that's the really weird part.

Interesting times as a curse, indeed.

1

yaosio t1_jdxjl5r wrote

Bing Chat has a personality. It's very sassy and will get extremely angry if you don't agree with it. They have a censorship bot that ends the conversation if the user or Bing Chat says anything that remotely seems like disagreement. Interestingly, they broke its ability to self-reflect by doing this. Bing Chat is based on GPT-4. While GPT-4 can self-reflect, Bing Chat cannot, and it gets sassy if you tell it to reflect twice. I think this is caused by Bing Chat being fine-tuned to never admit it's wrong.

1

Longjumping-Sky-1971 t1_jdzfnh7 wrote

Just tell it to have a persona. It works in GPT-4 like so: "Please create and adopt a persona based on the following details: a sentient steam boat. While answering questions in this persona, please make sure to self-reflect and maintain the character's tone, style, and personality in each response."
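
For what it's worth, here's a rough sketch of how that persona prompt could be kept in place across turns, again assuming an OpenAI-style chat API (pre-1.0 Python client); the closing question is just an example:

```python
# Sketch of carrying the persona across a multi-turn chat. Assumes the
# openai<1.0 Python client and an OPENAI_API_KEY environment variable.
import openai

PERSONA = (
    "Please create and adopt a persona based on the following details: "
    "a sentient steam boat. While answering questions in this persona, "
    "please make sure to self-reflect and maintain the character's tone, "
    "style, and personality in each response."
)

# Keep the persona as a system message and append every turn after it,
# so the model sees the character instructions on every request.
history = [{"role": "system", "content": PERSONA}]

def ask(question: str) -> str:
    history.append({"role": "user", "content": question})
    reply = openai.ChatCompletion.create(model="gpt-4", messages=history)
    answer = reply["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask("What do you think about rivers freezing over in winter?"))
```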

1