Midjourney's Amazing New Feature PLUS: Stable Video 1.1 from Stability.AI!
TLDR
The video covers a Midjourney update on style consistency in AI image generation, introducing a new feature that blends image prompting with style tuning. It explores the use of style references and demonstrates how to use the Midjourney Alpha website. The video also delves into Stability.AI's Stable Video platform for video diffusion, showcasing its capabilities and potential. The content highlights the creative possibilities and current limitations of these AI tools, emphasizing their potential for future development.
Takeaways
- 🚀 Introduction of a Midjourney update focusing on style consistency in AI-generated images.
- 🎨 Utilization of style references as the first step in the new consistent styles algorithm, akin to image prompting combined with style tuning.
- 🔗 The process involves using image URLs alongside prompts to create a new style, demonstrated through the Midjourney Alpha website.
- 📈 Access to the Midjourney Alpha website is limited but expanding, with priority given to users who have generated a significant number of images.
- 🌐 The same commands work on Discord, giving wider access.
- 🖼️ The ability to drag and drop an image for immediate style referencing, resulting in outputs heavily influenced by the reference image.
- 📸 Experimentation with blending two different images as style references, leading to unique and warmer styled images.
- 🔄 Control over the influence of each image URL through weight commands, allowing for a balance between the styles.
- 📚 Information on the process and commands is available as a free PDF on Gumroad, with donations appreciated.
- 🔄 Limitations of the feature include the inability to create consistent characters and potential temperamental results when pushing the boundaries.
- 🎥 Stability.AI's platform for Stable Video Diffusion is in beta, with early access granted to those who signed up early.
- 🎞️ Options for creating videos from either images or text prompts, with basic camera motion controls and experimental features.
- 🌐 The potential for Stable Video to become a powerful tool in the creative AI space, with continued development expected.
Q & A
What is the main focus of the Midjourney update discussed in the transcript?
-The main focus of the Midjourney update is style consistency: a new feature that blends image prompting with style tuning to create a new style from one or more image URLs provided alongside a prompt.
How does the new style reference feature work in Midjourney?
-The style reference feature works by issuing the '--sref' parameter along with the URL of the image being referenced. This lets the user generate an image whose style is influenced by the reference, essentially combining the concept from the prompt with the visual style of the reference image.
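A minimal sketch of the syntax, with a placeholder URL (on Discord the prompt goes through /imagine; the Alpha website takes the prompt text directly):

```
/imagine prompt: cyberpunk woman --sref https://example.com/reference-style.png
```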
What is the current access status for the new Midjourney Alpha website?
-Access to the new Midjourney Alpha website has been opened to users who have generated more than 5,000 images. Users who have generated 1,000 images are expected to gain access soon.
How can the influence of each image URL in style referencing be controlled?
-The influence of each image URL in style referencing can be controlled with weight commands, which let the user adjust the intensity of each style's influence, with values ranging from 1 to 1,000.
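A sketch of per-reference weighting, assuming Midjourney's '::' weight suffix after each URL (the URLs are placeholders; here the first reference is weighted twice as heavily as the second):

```
/imagine prompt: dog samurai --sref https://example.com/style-a.png::2 https://example.com/style-b.png::1
```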
What limitations were mentioned about the new style consistency feature?
-The new style consistency feature does not yet support consistent characters, and it can become temperamental when pushed too far, especially in the alpha phase. Additionally, combining three style references without a thematic connection can result in weird or bland outcomes.
What is the current status of Stable Video from Stability.AI?
-Stable Video from Stability.AI is currently in beta and free to use during this period. The platform is built on the open-source Stable Video Diffusion 1.1 model.
What are the two starting options for creating a video with Stable Video?
-With Stable Video, users can start from either an image or a text prompt to generate videos.
What camera motion options are available in Stable Video?
-In Stable Video, users can lock the camera, shake the camera, tilt it down, perform an orbit or a pan, and zoom in and out. There is also an experimental camera motion feature that has yet to be fully explored.
How does the voting system work in Stable Video?
-After hitting generate, users can vote on which generations from other users they think look good. This community-driven feature lets users contribute to the selection of the best outcomes.
What are the different style options available for text-to-video generation in Stable Video?
-For text-to-video generation, Stable Video offers three different aspect ratios and a variety of styles to choose from, such as digital art, among others.
What is the speaker's overall impression of the creative AI space?
-The speaker is very excited and impressed by the advancements in the creative AI space. They are particularly enthusiastic about the new features and improvements in Midjourney and Stable Video, and they eagerly anticipate developments in the near future.
Outlines
🎨 Introducing Midjourney's Style Consistency Feature
The paragraph discusses the introduction of a new feature in Midjourney's update focused on style consistency. It explains how the feature works by using image URLs with prompts to create a new style, similar to image prompting combined with style tuning. The speaker demonstrates the process on the Midjourney Alpha website and explains that access is granted based on the number of images a user has generated. The summary also touches on the differences between style referencing and image referencing, and how the feature can be influenced by multiple images and controlled via commands. The paragraph concludes with information on where to find more details as a free PDF.
🚀 Exploring the Limitations and Potential of Style References
This paragraph delves into the capabilities and limitations of the style consistency feature. It highlights the creation of new styles and explores the use of multiple style references to influence the generated images. The speaker experiments with combining different images and prompts, such as 'cyberpunk woman' and 'dog samurai,' to create unique blends. The limitations are discussed, particularly the challenges of maintaining consistent characters and the varying results when using three style references. The paragraph also mentions increasing the strength of style references with the --sw command and concludes with a link to a free PDF on Gumroad for more information.
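Putting the pieces together, a hedged sketch of a blended-style prompt combining two references and raising the overall style strength (assuming --sw accepts values up to 1,000; URLs are placeholders):

```
/imagine prompt: dog samurai --sref https://example.com/cyberpunk.png https://example.com/watercolor.png --sw 800
```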
📹 Stability.AI's Beta Launch and Stable Video Diffusion Features
The final paragraph shifts focus to Stability.AI's beta launch of its platform for Stable Video Diffusion. It outlines the two starting points for video creation: an image or a text prompt. Despite some features missing due to the beta status, the speaker expresses excitement over the available options, such as camera lock, shake, tilt, orbit, pan, and zoom. The speaker also discusses the experimental features and the community-driven voting system for generations. Examples of generated videos, including a pirate ship and a character from a crime film, illustrate the capabilities of Stable Video Diffusion. The paragraph concludes with an encouragement for viewers to sign up for the beta and anticipation of further developments in the creative AI space.
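The hosted platform is a web UI, but the underlying model is open source, so image-to-video generation can also be run locally. A minimal sketch using Hugging Face's diffusers library (this uses the publicly released SVD-XT checkpoint rather than the 1.1 model behind the beta site, and assumes diffusers >= 0.24 on a CUDA GPU; the image URL is a placeholder):

```python
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Load the open-source Stable Video Diffusion image-to-video pipeline.
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.enable_model_cpu_offload()  # keeps VRAM usage manageable

# Any still image can serve as the starting frame; this URL is a placeholder.
image = load_image("https://example.com/pirate-ship.png").resize((1024, 576))

# motion_bucket_id loosely controls how much motion appears in the clip.
frames = pipe(
    image,
    decode_chunk_size=8,
    motion_bucket_id=127,
    noise_aug_strength=0.02,
).frames[0]

export_to_video(frames, "generated.mp4", fps=7)
```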
Keywords
💡Midjourney Update
💡Style Consistency
💡Stable Video
💡Style References
💡Image Prompting
💡Style Tuning
💡Consistent Characters
💡Dash Commands
💡Gumroad
💡AI-generated Content
Highlights
Introduction of a Midjourney update focusing on style consistency.
Exploration of a new feature that combines image prompting with style tuning.
Use of image URLs with prompts to create a new style.
Access to the new Midjourney Alpha website for users who have generated a significant number of images.
The --sref command allows referencing an image to create a style.
Demonstration of how the style reference feature works with a basic prompt and an image URL.
Difference between style referencing and simple image referencing.
Ability to drag an image into the workspace for immediate style referencing.
Influence of a reference image on the generated content, such as changing the character's ethnicity.
Combining two different images as style references to create a blended style.
Control over the influence of each image URL through the use of weight commands.
Availability of comprehensive information as a free PDF on Gumroad.
Discussion on the limitations of the style reference feature with three images and thematic coherence.
Introduction to Stability.AI's platform for Stable Video Diffusion 1.1.
Options to start with an image or a text prompt in Stable Video.
Features and camera motions available in Stable Video, including camera lock, shake, tilt, orbit, pan, and zoom.
The experimental nature of the platform and its potential for future development.
Impressive results from Stable Video, including character and object generation with camera motion.
The free beta period for Stable Video and the encouragement for users to sign up.
Excitement for the ongoing advancements in the creative AI space.