Midjourney's Amazing New Feature PLUS: Stable Video 1.1 from Stability.AI!

Theoretically Media
1 Feb 2024 · 10:54

TLDR: The video covers a Midjourney update on style consistency in AI image generation, introducing a new feature that blends image prompting with style tuning. It explains how style references work and demonstrates them on the Midjourney Alpha website. The video then turns to Stable Video, Stability.AI's platform for video diffusion, showcasing its capabilities. Throughout, it highlights the creative possibilities and current limitations of both tools and their potential for future development.

Takeaways

  • 🚀 Introduction of a Midjourney update focused on style consistency in AI-generated images.
  • 🎨 Style references are the first step in the new consistent-styles algorithm, akin to image prompting combined with style tuning.
  • 🔗 The process uses image URLs alongside prompts to create a new style, demonstrated on the Midjourney Alpha website.
  • 📈 Access to the Midjourney Alpha website is limited but expanding, with priority given to users who have generated a significant number of images.
  • 🌐 The same commands work on Discord, giving wider access to the feature.
  • 🖼️ Images can be dragged and dropped for immediate style referencing, producing outputs heavily influenced by the reference image.
  • 📸 Blending two different images as style references yields unique, warmer-styled results.
  • 🔄 The influence of each image URL can be controlled with weight values, allowing a balance between the styles.
  • 📚 A free PDF covering the process and commands is available on Gumroad, with donations appreciated.
  • 🔄 Limitations include the inability to create consistent characters and temperamental results when the feature is pushed to its boundaries.
  • 🎥 Stability.AI's platform for Stable Video Diffusion is in beta, with early access granted to those who signed up early.
  • 🎞️ Videos can be created from either images or text prompts, with basic camera motion controls and experimental features.
  • 🌐 Stable Video has the potential to become a powerful tool in the creative AI space as development continues.

Q & A

  • What is the main focus of the Midjourney update discussed in the transcript?

    -The main focus of the Midjourney update is style consistency: a new feature that blends image prompting with style tuning, creating a new style from one or more image URLs provided alongside a prompt.

  • How does the new style reference feature work in Midjourney?

    -The style reference feature works by adding the '--sref' parameter followed by the URL of the image being referenced. The result is an image whose subject comes from the prompt and whose visual style is drawn from the reference image.
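    Put together, a style-reference prompt of the kind described above might look like the following sketch (the URL is a placeholder, not one from the video):

    ```
    /imagine prompt: a cyberpunk woman walking through a neon market --sref https://example.com/reference-style.png
    ```

    The subject comes from the text prompt, while the look and feel of the output is pulled from the referenced image.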

  • What is the current access status for the new Midjourney Alpha website?

    -Access to the new Midjourney Alpha website has been opened to users who have generated more than 5,000 images. Users who have generated 1,000 images are expected to gain access soon.

  • How can the influence of each image URL in style referencing be controlled?

    -The influence of each image URL can be controlled with weight values, which adjust the intensity of each reference's style influence on a scale from 1 to 1,000.
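    Assuming Midjourney's standard '::' weight suffix for balancing references, weighting two style images might look like this sketch (both URLs are placeholders):

    ```
    /imagine prompt: dog samurai --sref https://example.com/style-a.png::2 https://example.com/style-b.png::1
    ```

    Here the first reference is weighted twice as heavily as the second, tilting the blended style toward the first image.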

  • What limitations were mentioned about the new style consistency feature?

    -The new style consistency feature does not yet support consistent characters, and it can become temperamental when pushed too far, especially in the alpha phase. Additionally, combining three style references without a thematic connection can result in weird or bland outcomes.

  • What is the current status of Stable Video from Stability.AI?

    -Stable Video from Stability.AI is currently in beta and is free to use during this period. The platform is built on the open-source Stable Video Diffusion 1.1 model.

  • What are the two starting options for creating a video with Stable Video?

    -With Stable Video, users can start from either an image or a text prompt to generate videos.

  • What camera motion options are available in Stable Video?

    -In Stable Video, users can lock the camera, shake it, tilt it down, and perform an orbit, a pan, or a zoom in and out. There is also an experimental camera motion feature that has yet to be fully explored.

  • How does the voting system work in Stable Video?

    -After hitting generate, users can vote on which generations from other users look best. This community-driven feature lets users help surface the strongest outcomes.

  • What are the different style options available for text-to-video generation in Stable Video?

    -For text-to-video generation, Stable Video offers three aspect ratios and a variety of styles to choose from, such as digital art.

  • What is the speaker's overall impression of the creative AI space?

    -The speaker is excited and impressed by the advancements in the creative AI space, particularly the new features in Midjourney and Stable Video, and eagerly anticipates the developments to come in the near future.

Outlines

00:00

🎨 Introducing Midjourney's Style Consistency Feature

The paragraph discusses a new Midjourney feature focused on style consistency. It explains how the feature uses image URLs alongside prompts to create a new style, similar to image prompting combined with style tuning. The speaker demonstrates the process on the Midjourney Alpha website and notes that access is granted based on the number of images a user has generated. The summary also covers the difference between style referencing and image referencing, how multiple images can influence the result, and how that influence is controlled via commands. The paragraph concludes with where to find more details in a free PDF.

05:01

🚀 Exploring the Limitations and Potential of Style References

This paragraph delves into the capabilities and limitations of the style consistency feature. It covers creating new styles and using multiple style references to influence the generated images. The speaker experiments with combining different images and prompts, such as 'cyberpunk woman' and 'dog samurai,' to create unique blends. The limitations discussed include the difficulty of maintaining consistent characters and the varying results when using three style references. The paragraph also mentions increasing the strength of style references with the --sw command and concludes with a link to a free PDF on Gumroad.
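As a sketch of the style-weight command discussed in this section (written '--sw' in Midjourney's syntax; the URL is a placeholder), raising the strength of a style reference might look like:

```
/imagine prompt: cyberpunk woman --sref https://example.com/style.png --sw 500
```

Higher values push the output harder toward the reference style; the parameter runs from 0 to 1,000, with 100 as the default.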

10:02

📹 Stability.AI's Beta Launch and Video Diffusion Features

The final paragraph shifts focus to Stability.AI's beta launch of its platform for Stable Video Diffusion. It outlines the two starting points for video creation: an image or a text prompt. Despite some features missing in the beta, the speaker is enthusiastic about the available options, such as camera lock, shake, tilt, orbit, pan, and zoom. The speaker also discusses the experimental features and the community-driven voting system for generations. Examples of generated videos, including a pirate ship and a character from a crime film, illustrate the platform's capabilities. The paragraph concludes by encouraging viewers to sign up for the beta and anticipating further developments in the creative AI space.

Mindmap

Keywords

💡Midjourney Update

The 'Midjourney Update' refers to a significant upgrade to the AI image generation platform Midjourney. This update introduces a new feature focused on style consistency, allowing users to create images with a more cohesive and uniform aesthetic. It is a pivotal development in the platform's evolution, giving users more control over the visual output.

💡Style Consistency

Style consistency is the concept of maintaining a uniform and recognizable visual style across multiple images or pieces of content. In the context of the video, it refers to the new Midjourney feature that enables users to generate images with a consistent style, either by using a single reference image or by blending multiple images to create a new, cohesive style.

💡Stable Video

Stable Video is a platform for generating video content with a stable and coherent output, ensuring that the generated videos have a high level of quality and consistency. In the video script, it is described as a new platform from the 'OG Godfathers of Stable Diffusion,' marking it as a significant development in AI-generated video content.

💡Style References

Style references are a method used in AI image generation where specific images are utilized as a basis for the style or aesthetic of the newly generated content. This concept is central to the Mid Journey update discussed in the video, allowing users to direct the visual output towards a particular look or feel by providing an image or multiple images as a reference.

💡Image Prompting

Image prompting is the process of generating an image based on a textual description or 'prompt' provided by the user. The user inputs a description, and the AI creates an image that corresponds to it. In the video, image prompting is discussed in relation to how it combines with style references to create new styles.

💡Style Tuning

Style tuning is the process of adjusting or modifying the stylistic elements of an AI-generated image to achieve a particular aesthetic or visual effect. This concept is closely related to style consistency and is used in the context of the video to describe how users can fine-tune the style of their generated images by using style references and other commands.

💡Consistent Characters

Consistent characters refer to the ability to generate images of characters that maintain their identity and appearance across multiple generations or iterations. This is an important aspect of character design and storytelling, ensuring that a character remains recognizable and true to their established visual identity. The video script mentions that Midjourney is still working on this capability, indicating a planned improvement to the platform.

💡Dash Commands

Dash commands are specific instructions or parameters used in AI image generation platforms to control and customize the output. In Midjourney, these commands begin with a double dash (--) and can influence various aspects of the generated content, such as style, aspect ratio, or detail. In the context of the video, dash commands enable the style referencing feature and control the influence of style reference images.

💡Gumroad

Gumroad is an online platform that allows creators to sell their work directly to consumers, often in the form of digital products like ebooks, software, or other downloadable content. In the video script, Gumroad is mentioned as the place where users can find a free PDF covering the Midjourney update and style referencing, highlighting its role as a distribution channel for complementary materials.

💡AI-generated Content

AI-generated content refers to any form of media or material, such as images, videos, or text, that is created with the assistance of artificial intelligence algorithms. This content is produced without direct human creation, instead relying on AI systems to generate it based on input parameters or prompts provided by users. The video script discusses advancements in AI-generated content, particularly in the realm of image and video generation.

Highlights

Introduction of a Midjourney update focusing on style consistency.

Exploration of a new feature that combines image prompting with style tuning.

Use of image URLs with prompts to create a new style.

Access to the new Midjourney Alpha website for users who have generated a significant number of images.

The --sref command allows referencing an image to create a style.

Demonstration of how the style reference feature works with a basic prompt and an image URL.

Difference between style referencing and simple image referencing.

Ability to drag an image into the workspace for immediate style referencing.

Influence of a reference image on the generated content, such as changing the character's ethnicity.

Combining two different images as style references to create a blended style.

Control over the influence of each image URL through weight values.

Availability of comprehensive information as a free PDF on Gumroad.

Discussion on the limitations of the style reference feature with three images and thematic coherence.

Introduction to Stability.AI's platform for Stable Video Diffusion 1.1.

Options to start with an image or a text prompt in Stable Video.

Features and camera motions available in Stable Video, including camera lock, shake, tilt, orbit, pan, and zoom.

The experimental nature of the platform and its potential for future development.

Impressive results from Stable Video, including character and object generation with camera motion.

The free beta period for Stable Video and the encouragement for users to sign up.

Excitement for the ongoing advancements in the creative AI space.