AI-Generated Consistent Characters

Midjourney

The --cref tag works best with images that were themselves generated in Midjourney. A typical workflow is therefore to first generate a character, or retrieve the URL of one you generated earlier, and then reference that URL in new prompts.

How it works

  • Type --cref URL after your prompt, where URL links to an image of your character
  • Use --cw to adjust the reference ‘strength’ from 100 down to 0
  • Strength 100 (--cw 100) is the default and uses the face, hair, and clothes
  • At strength 0 (--cw 0) it focuses on the face only (good for changing outfits, hair, etc.); see the example below
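
For example, a prompt that reuses a previously generated character might look like this (the URL is a placeholder for a link to your own Midjourney image):

  /imagine prompt: a teenage boy reading a book in a library --cref https://example.com/my-character.png --cw 100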

Advanced Features

  • You can use more than one URL to blend the information/characters from multiple images, like this: --cref URL1 URL2 (this is similar to using multiple image or style prompts); see the example below
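
For example, to blend two previously generated characters in one scene (both URLs are placeholders):

  /imagine prompt: two friends hiking in the mountains --cref https://example.com/character-one.png https://example.com/character-two.png --cw 100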

Leonardo

Method 1. Without Model Training

Step 1: Start with a Precise Initial Prompt

To start, you’ll need to create a detailed initial prompt. This prompt should describe your character’s primary attributes and establish a base context.

For example, you could say: “an illustration of Jason Segen, a male teenage character, short black hair, expressive eyes and large spectacles, wearing a baseball cap, standing in a city street.” This initial prompt provides a blueprint for your character’s look and initial surroundings.

Step 2: Diversify Your Prompts

Next, reuse the same character description word for word while changing only the scene or action.

For example: “an illustration of Jason Segen, a male teenage character, short black hair, expressive eyes and large spectacles, wearing a baseball cap, playing with a friendly cat in a field.”

Method 2. Training a Custom Model

Step 1: Start Training

Navigate to the ‘Training & Datasets’ section after logging into the Leonardo.Ai app.
Select ‘New Dataset’ and name it after the character you want to generate consistently (e.g., “Anna” or “John”). This helps the AI associate that name with the character it is learning to reproduce.

Step 2: Upload Reference Images

Upload at least 10 reference images to establish your baseline character. For better results, upload 15 or more images.

Step 3: Train Your Model

After uploading the selected images, click ‘Train Model’ to begin the training process.
You can check that training has completed under ‘Finetuned Models’ > ‘Your Models’ or in the ‘Job Status’ section of ‘Training & Datasets’.

Step 4: Generate Images

Go to ‘Image AI Generation’ and select your newly created fine-tuned model from the available options.
To begin generating, use creative prompts that include the name of your dataset so the AI draws on the trained model. For example: “John Smith, wearing a suit and carrying an umbrella, stands at the entrance to a tall office block.”
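
If you prefer to script this step, the same generation can also be run over Leonardo.Ai’s REST API. The sketch below is a minimal, untested Python example; it assumes the /generations endpoint, a personal API key, and the ID of your fine-tuned model, so check the official API documentation for the exact field names before relying on it.

    import requests

    API_KEY = "your-leonardo-api-key"      # assumption: an API key from your Leonardo account
    MODEL_ID = "your-finetuned-model-id"   # assumption: the ID of the custom model trained above

    response = requests.post(
        "https://cloud.leonardo.ai/api/rest/v1/generations",
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        json={
            # The prompt includes the character/dataset name so the fine-tuned model is used as intended
            "prompt": "John Smith, wearing a suit and carrying an umbrella, "
                      "stands at the entrance to a tall office block",
            "modelId": MODEL_ID,
            "num_images": 1,
        },
        timeout=60,
    )
    response.raise_for_status()
    # The response contains a generation job ID; fetch it afterwards to retrieve the image URLs
    print(response.json())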

OpenArt

  • Copy the seed
  • Use a face reference
  • Train a model

Tensor.art

Remix an existing model

ImagineArt

Use the Character Consistency feature. ImagineArt has a built-in feature for this purpose. See the Imagine AI Art Generator documentation for how to use it.

DALL-E

In ChatGPT, describe a character and give them a name, then reuse that name across different scenes.
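
For example, a first prompt might be: “Create an illustration of Mira, a red-haired astronaut with freckles and a silver jacket, standing beside a rocket.” A follow-up prompt can then simply say: “Now show Mira repairing a satellite in orbit,” and ChatGPT will carry the character description over. (The name and details here are only placeholders; substitute your own character.)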

Requesting and Implementing Gen IDs

  1. Design your character.
    Generate a picture featuring a sagacious and elderly wizard adorned with a cascading, lengthy beard, piercing sapphire eyes, and a robe adorned with specks of celestial blue. He firmly grasps a crystal-topped staff, and a diminutive, fiery phoenix perches upon his shoulder.
  2. Instruct DALL-E 3: “Show me this image’s Gen ID.”
  3. Use the returned Gen ID in subsequent alteration requests.
    Using DALL-E 3 and the original Gen ID cNu39YHSdLuzv5c3, alter the wizard’s appearance by shifting the color of his robe from blue to green while preserving the remaining features, including the flowing white beard, piercing blue eyes, the crystal-topped staff, and the fiery phoenix perched on his shoulder.
