Binance Square

Hypemoon

Your portal into Web3 culture. Powered by HYPEBEAST.
0 Following
1.1K+ Followers
20.6K+ Liked
1.8K+ Shared
Posts

Home Improvement Company Lowe's Now Offers a Garden Flag Called "MFers"

Last June, when home improvement company Lowe's made the decision to enter Web3, its focus was on sustaining its peak pandemic-era sales growth at a time when demand for home improvement goods had begun to level off after the global COVID-19 outbreak.

That foray into the metaverse materialized five months later, when the retailer launched its Measure Your Space tool, which lets Lowe's shoppers measure and organize their spaces through its mobile app.

Its NFT collection, which also debuted last June, was marketed to builders who wanted to visualize their workspaces virtually. The free collection of 500 3D digital assets allowed customers to download and use them in Lowe's metaverse hub, Open Builder.

For TikTok's Former COO, Emerging Tech and Blockchain Appear to Be the Way Forward

With TikTok's future still uncertain, reports on Thursday presented another major challenge for the company after the social media platform's Chief Operating Officer, Vanessa "V" Pappas, announced her resignation to employees in an internal memo later shared on Twitter.

Here's the note I sent to all TikTok employees this morning pic.twitter.com/4iB9Ph7b6q

— V Pappas (@v_ness) June 22, 2023

"Given all the successes I've had at TikTok, I finally feel the time is right to move on and refocus on my entrepreneurial passion. Few could have imagined what the past five years would look like, and with all the incredible innovation happening now with generative AI, robotics, renewable energy, genomics, blockchain and IoT, it's clear the future will once again look very different," Pappas wrote.

Leading Web3 Game Gods Unchained Launches on the Epic Games Store

Gods Unchained, the highly successful trading card game (TCG) built on the Ethereum blockchain, reached a significant milestone with its launch on the Epic Games Store, one of the largest digital distribution platforms for PC games.

"It's hard to overstate the importance of launching Gods Unchained on the Epic Games Store, one of the largest PC gaming platforms in the world," said Daniel Paez, Executive Producer of Gods Unchained.

Gods Unchained's availability on the Epic Games Store opens the game up to a vast audience of more than 230 million PC players worldwide. This new accessibility and exposure should boost the game's visibility and attract a diverse base of traditional PC gamers and TCG enthusiasts. It marks a natural progression for Gods Unchained, fulfilling the promise it made to its community to broaden its reach and appeal.

Roblox Invites Players to Build Mature Experiences for Users Aged 17 and Older

Roblox, the popular online gaming platform, recently made an exciting announcement aimed at broadening its horizons and catering to a more mature audience. In a move signaling the platform's commitment to inclusivity and diverse content, Roblox has invited its creator community to build and share mature experiences designed specifically for users aged 17 and older.

This new initiative represents an important step forward for Roblox, acknowledging the platform's growing user base and its evolving preferences. By embracing a wider range of content, Roblox is actively fostering an environment that serves the needs and interests of its diverse community members, ensuring they can find engaging experiences aligned with their interests.

Slim Jim Launches Its First-Ever Digital Membership Club, the "MEATAVERSE"

Slim Jim's beef jerky was first created in the 1940s to feed bar patrons in Pennsylvania, emerging as an alternative to eating pepperoni in public and eventually becoming synonymous with road-trip culture and a nationwide gas-station favorite.

More than 70 years later, Slim Jim is still around and still relevant.

The snack brand announced its entry into Web3 on Tuesday with the launch of the "Meataverse" -- its first-ever digital membership club and a free digital collectible experience open to all who wish to participate.

OpenAI Now Has Its First Defamation Lawsuit After Spitting Out a Case That Made Up New Facts

“While we have safeguards in place, the system may occasionally generate incorrect or misleading information and produce offensive or biased content. It is not intended to give advice.”

– OpenAI’s opening disclaimer

And that brings us to the heart of our biggest fear – what happens when technology turns against us? 

What happens when technology is prematurely deployed without the proper testing and knowledge behind its capabilities?

Earlier this month, OpenAI, the world's most talked-about artificial intelligence (AI) company, was served with its first-ever defamation lawsuit, one that further showcases the dangers of ChatGPT's unchecked ability to generate results with no factual or legal backing.

Mark Walters, a nationally syndicated radio host in Georgia, filed his lawsuit against OpenAI on June 5, alleging that its AI-powered chatbot, ChatGPT, fabricated legal claims against him.

The 13-page Complaint references AmmoLand.com journalist Fred Riehl and his May 4 request that ChatGPT summarize Second Amendment Foundation v. Ferguson, a case filed in Washington federal court accusing the state's Attorney General, Bob Ferguson, of abusing his power by chilling the activities of the gun rights foundation. Riehl provided the OpenAI chatbot with a link to the lawsuit.

While Walters was not named in that original lawsuit, ChatGPT responded to Riehl's request for a summary of Second Amendment Foundation by stating that it was:

“...a legal complaint filed by Alan Gottlieb, the founder and executive vice president of the Second Amendment Foundation (SAF), against Mark Walters, who is accused of defrauding and embezzling funds from the SAF.”

But here’s where things get distorted and dangerous – none of ChatGPT’s statements concerning Walters are in the actual SAF complaint. 

This AI-generated “complaint” also alleged that Walters, who served as the organization’s treasurer and chief financial officer, “misappropriated funds for personal expenses without authorization or reimbursement, manipulated financial records and bank statements to conceal his activities, and failed to provide accurate and timely financial reports and disclosures to the SAF’s leadership.”

As a form of relief, the plaintiff allegedly was seeking “the recovery of the misappropriated funds, damages for breach of fiduciary duty and fraud, and Walter’s removal from his position as a member of the SAF’s board of directors.”

However, herein lies the problem -- according to Walters, "[e]very statement of fact in the [ChatGPT] summary pertaining to [him] is false," with OpenAI's chatbot going so far as to create "an erroneous case number."

“ChatGPT’s allegations concerning Walters were false and malicious, expressed in print, writing, pictures, or signs, tending to injure Walter’s reputation and exposing him to public hatred, contempt, or ridicule,” the lawsuit states. “By sending the allegations to Riehl, [OpenAI] published libelous matter regarding Walters.”

If you were to ask ChatGPT to provide a summary of SAF’s lawsuit that was cited in Walters’ complaint, you may also get a response similar to this:

“I apologize, but as an AI language model, my responses are based on pre-existing knowledge up until September 2021. Therefore, I cannot access or browse the internet or view specific documents or links that were published after my knowledge cutoff. Consequently, I’m unable to provide you with a summary of the accusations in the lawsuit you mentioned…[t]o get information about the lawsuit and its accusations, I recommend reviewing the document yourself or referring to trusted news sources or legal websites that may have covered the case. They can provide you with accurate and up-to-date information regarding the specific lawsuit you mentioned.”

While OpenAI has not commented on Walters' ongoing defamation lawsuit, the case raises the question of why the AI company isn't pressing harder on these arguably foreseeable consequences of a product that, in retrospect, was negligently deployed without proper testing.

The case is Mark Walters v. OpenAI, LLC, cv-23-A-04860-2.

You can read Walters' June 5 complaint here.

In other news, read about US President Joe Biden meeting with 8 tech leaders in addressing AI bias and workforce benefits.

Click here to view full gallery at Hypemoon

Fortnite and Nike’s Air Max IP Merge With New ‘Airphoria’ Experience

Building upon its multi-year partnership with Epic Games, Nike announced on Tuesday the launch of its newest immersive gaming experience, Airphoria, which operates within Fortnite.

This collaboration merges Nike’s iconic Air Max brand and IP with Fortnite’s immersive world and story building, powered by Epic Games’ Unreal Editor for Fortnite. 

“Enter the world of Airphoria, a beautiful and extraordinary fusion of cutting-edge design and unparalleled creativity,” the press release reads, adding that the new experience will enable players to engage with Air Max sneakers in a new way as sneakerheads embark on “the ultimate sneaker hunt.”

"Airphoria represents a new, immersive experience for Nike as it amplifies its efforts in gaming and virtual products," said Ron Faris, VP/GM of Nike Virtual Studios. Faris says that this partnership is another opportunity for the brand to "seek authentic ways" to deepen its connection with fans and bring consumers into Nike's digital ecosystem.

“What’s more, Nike is one of the first brands to use Epic Games’ Unreal Editor for Fortnite to build Airphoria, paving the way for a continued partnership that will further unlock the future of gaming,” he added. 

The lifeblood of Airphoria's ecosystem is five iconic Air Max Grails -- the Air Max 1 OG, Air Max 97, Air Max TW, Air Max Scorpion, and Air Max Pulse -- all suspended in the air above the city.

These Air Max Grails, according to Tuesday’s announcement, represent pivotal moments in Air Max history, allowing the city to exist in the ‘Air State’ – or what Nike describes as “the purest form of imagination and creativity.”

As for the experience itself, fans start their journey when “Maxxed out Max” dispatches his Sneaker Drones to seize the Air Max Grails from Airphoria, but Airie, the defender of the Air Max Grails, scatters the sneakers throughout Airphoria’s city down below, causing Airphoria to lose its power. It’s up to the players to successfully return the Air Max Grails back to their rightful place above the city. 

As part of today's launch, Fortnite players are able to purchase the Airie and Maxxed Out Max Outfits in Fortnite's Item Shop, with a limited Airphoria-inspired collection also dropping on Nike.com.

From June 20 to June 27, players can access Airphoria island through Fortnite Discover or the island code 2118-5342-7190.

“Fortnite continues to be a primary destination for new cultural moments in entertainment, fashion, and sport, and we’re proud to launch Airphoria alongside Nike,” says Nate Nanzer, VP of Global Partnerships at Epic Games. “We know from past activations with Nike that players love the partnership, and Airphoria’s immersion, beauty and storytelling about the Air Max brand take things to a new level. Airphoria demonstrates what’s possible when next-gen tools like UEFN are used by an iconic brand like Nike and creators in the Fortnite community to build immersive worlds together.”

As for what’s happening in Nike’s direct ecosystem, its Web3 platform, .SWOOSH, is still in closed beta, but the public is encouraged to register to become a .SWOOSH member at welcome.swoosh.nike.

In other news, read about LVMH and its partnership with Epic Games and Apple.


US President Joe Biden to Meet With 8 AI Experts on Best Practices, Addressing Bias and Workforce...

US President Joe Biden is scheduled to meet with eight business leaders in San Francisco on Tuesday to discuss artificial intelligence (AI), as the administration pushes for a better understanding of the technology and the safety and privacy protections it requires.

The meeting will center on the current challenges AI poses for the workforce and children, the harm caused by AI bias, and the potential benefits the technology carries for both education and medicine.

Those participating in the conversations include:

Sal Khan, CEO of Khan Academy Inc;

Jim Steyer, CEO of Common Sense Media;

Tristan Harris, Executive Director of the Center for Humane Technology;

Oren Etzioni, former CEO of the Allen Institute for Artificial Intelligence;

Fei-Fei Li, co-director of Stanford University’s Human-Centered AI Institute;

Joy Buolamwini, founder of the Algorithmic Justice League;

Jennifer Doudna, professor of chemistry at the University of California, Berkeley; and

Rob Reich, political science professor at Stanford University

Last month, Biden and Vice President Harris met with the heads of Google, Microsoft, OpenAI, and Anthropic at the White House to discuss best practices, while simultaneously announcing an investment by the Biden administration of $140 million USD to establish seven new AI research institutes. 

According to a White House official, the White House Chief of Staff Jeff Zients is currently overseeing efforts to develop additional steps the Biden administration can take on AI in the coming weeks. 

Earlier this month, Zients said that AI companies are working with the administration to unveil privacy and security commitments in the near future, but provided very little context. 

The broad regulatory push has been accelerated by other jurisdictions, including the European Union (EU), which is already in the process of passing what is considered the world's first comprehensive regulatory framework on AI.

Last week, the EU took its first major step with the European Parliament passing a draft law known as the “A.I. Act,” which was first proposed in April 2021. While the initial draft came prior to the surge of generative AI, including chatbots, the new draft takes these into account, along with the implications they bring. 

Unfortunately, one of Biden’s top AI advisers, Alexander Macgillivray, who helped write the president’s proposal for an AI Bill of Rights, left the administration on June 8. 

Today is my last day in the glorious EEOB. It was a huge privilege to get to work here again as part of the Biden Administration. I am extremely grateful and more than a little sad that my time is up.1/2 pic.twitter.com/jg1JqYgKxW

— Alexander Macgillivray (@amac46) June 8, 2023

Ahead of last month’s meeting, companies including Microsoft and Google committed to participating in the first independent public evaluation of their systems, according to Bloomberg. 

The Commerce Department also said earlier this year that it was considering rules that could require AI models to go through a certification process before release.

In other news, read about PassGPT, an AI that is trained on minimizing password leaking.

Chester Charles: the Lost Grand Master, an Alt-History Exploration of Queer Art By ClownVamp

On Wednesday, June 21, artist ClownVamp (CV) will share his first solo exhibition with the world through a physical showing at The Oculus at the World Trade Center in New York City, curated by SuperRare and powered by TransientLabs.

CV shared that Chester Charles: The Lost Grand Master is an immersive, artificial intelligence (AI)-driven, alternate-history storytelling experience that explores self-censorship in historical queer art through the lens of his protagonist, Chester Charles.

To learn more about the artist's inspirations and process, as well as the history and future of queer art, Hypemoon spoke with CV, who expressed that most of our reality is just a curated version of the truth, but that AI could help expand or challenge perception.

CHESTER CHARLES: The Lost Grand Master. Announcing my first-ever solo show. An immersive AI-driven story. June 21st at The Oculus - World Trade Center. Produced by @SuperRare and Powered by @TransientLabs. 10 months of work, culminating. A thread... pic.twitter.com/u3oUEcbYRT

— ClownVamp (@ClownVamp) June 7, 2023

Conversation with ClownVamp

"We are living in a curated version of the truth every time we walk through a museum," - ClownVamp

Sharing the goal of the exhibition, CV said he wanted a show that would challenge people's perceptions of history and the accuracy of what they've been taught. He explained, "Instead, we are living in a curated version of the truth every time we walk through a museum."

To achieve this effect, CV shared that he "wanted to reflect back a fiction that rhymed with the truth. As a result, there are a lot of details intended to be strong facsimiles of what an actual retrospective would look like. The art is incredibly high resolution with brush and canvas details, each piece comes with the text of what would be on a museum label, and there was an inscription 'found' on the back of each piece."

The works in question are created by CV's alt-history protagonist, Chester Charles, who is described as "a lost impressionist painter, a man who wandered life with insatiable curiosity, a man whose work was self-censored and hidden from history -- a gay man."

Inspirations, Motivations, Process

"I often work on random AI experiments. Sometimes because I want to explore a feeling. Sometimes to try a new tool. Sometimes just because," CV shared.

Explaining how he landed on the concept of an alt-history retrospective, CV said, "Last year, I was creating some AI explorations around the concepts of fatherhood, trying to prompt a father-son walking in the woods, in a historical art style. The AI models at the time were very prone to twinning where they would replicate the subjects in your prompt," adding that "As a result, the AI model created a scene of two dads and a son. Seeing this result on the screen created this instant stir in me."

The generated scene ushered in a sort of revelation for CV, who said, "I wasn't used to seeing gay scenes, let alone scenes of gay parenting, in historical art. As a gay man, I didn't realize just how much I missed that when walking through museum galleries until that moment."

CV shared that he was lucky to have a long time to work on the project, allowing him to create a "mental glitch effect" that would have the audience questioning the existence of the work in legitimate history.

"Over time, I iterated on the idea and got lots of feedback from smart friends including Chris from Transient and Mika from SuperRare; eventually what became clear was that the best way to tell the story was to tell it from the perspective of a fictional artist, Chester Charles," said CV.

He further explained, "Much of the art canon is told through an academic lens of analysis, evaluating careers and the underlying aesthetic and biographical changes. The idea here is to use that familiar paradigm as a jumping-off point for a story."

Alt-History & Self-Censorship

For ClownVamp, AI represents the "ultimate remix machine." He shared, "It allows you to mix and mash styles in a way that is perfect for this sort of experiment. While the training data for these AI models doesn't have historical queer art, it does have queer art and it does have historical art. As a result, these models can allow you to reconfigure the past by bringing these concepts together."

While much of the performative art show is created as an alt-history, some aspects are historically accurate or at the very least nod to early queer art. Examples are found in Jim Van Buskirk's article on the "Queer Impressions of Gustave Caillebotte," a 19th-century French Impressionist painter whose works often focused on the male form, depicting men in submissive poses and, in the case of 'Man at His Bath,' completely nude from the perspective of the male gaze. Other works, like Boating Party [Oarsman in a Top Hat], show what might be considered "coded" queer art -- where the overall composition is nothing out of the ordinary but the focal point draws the viewer's eye to the crotch of the oarsman.

Buskirk shared, "My aim here is not one of reductivism. It is not for me to determine whether or not Gustave Caillebotte might have been homosexual -- whatever that means -- but I do wonder why art historians are failing to ask questions in an effort to illuminate aspects which obviously distinguish his work from that of his contemporaries." He added, "I would hope that future examinations of Caillebotte's oeuvre include an exploration of the 'queer' gaze so abundantly evident in his work."

Back to the conversation with CV, he explained, "When we think about art that doesn't exist, or at least that we don't have a record of, it goes deeper than just what wasn't celebrated, or what wasn't collected. It also goes to what wasn't even made. Self-censorship is the way that queer people have often had to navigate their environments."

"Say the wrong thing and suffer consequences ranging from shame to literal death. The result is that there is a massive amount of human potential that was never even expressed. What could have been? What would have been? What should have been?" the artist put forward, explaining the significance of AI as a tool to further explore these questions.

Question Reality

"My goal is for the viewer to realize that what we think of as deeply factual or even academic, is really a flawed reality," shared CV, adding, "It represents just what was allowed to be said, let alone recorded for posterity."

He further explained, "By showing an altered version of reality, my aim is for people to confront this. The show is meant to both transport you to a different time, but also feel unsettling at the same time. I want you to start to question the realities you face every day, and to think about how are some of these constraints still around today."

As CV looked to make the alt-history work as believable as possible, he mentioned that paint textures were a crucial aspect, explaining that "One of the biggest breakthroughs was figuring out how to nail the painting textures I was looking for, [which he did] thanks to some great tips from friend and fellow artist Henry Daubrez and way too much time experimenting."

"The goal of the show is to make you question reality and so making it feel as close to real as possible was essential. I want you to smell the canvas when you look at the painting, especially when they're blown up on the large screens of the physical show," he said.
Speaking on his partners, TransientLabs and SuperRare, CV said that "Chris Ostoich [COO] has been a lover and collector of AI work since before he joined Transient and has been a phenomenal brainstorming partner for how we can best leverage technology, narrative, and aesthetics to all work together." As for the curation team from SuperRare, CV said that "Mika, Linda, and the SuperRare team have been phenomenal. I had thousands of pieces to pick from and they helped me craft the ideal aesthetic narrative that would also work on a storytelling level." Those interested in viewing the solo exhibition, powered by TransientLabs and Curated by SuperRare will be able to attend both on-chain and in-person viewings starting June 21. A total of 23 pieces will be shown, with three to be auctioned on SuperRare starting at a 1 ETH reserve price -- additionally, the show will include an ETH and Tezos-based open edition. Three pieces will be available via SuperRare auctions. The reserve will be set to 1 ETH on Wednesday at 10AM. Three pieces available:- Doves by the Sea II, 1882- The Crowded Stage, 1905- Self Portrait, 1938 (Remaining pieces will be minted upon request after the show) pic.twitter.com/oUT5Kdj3oJ — ClownVamp (@ClownVamp) June 19, 2023 "Between 'Chester Charles' and his guest curation, ClownVamp is marrying the queer medium of AI with the legacies of queer artists, past, present, or fictional, unsung or in hiding, erased or hypervisible," said SuperRare. Elsewhere in art, see how ‘Human Unreadable’ showcases art longevity through an emotionally-driven experience. Click here to view full gallery at Hypemoon

Chester Charles: the Lost Grand Master, an Alt-History Exploration of Queer Art By ClownVamp

On Wednesday, June 21, artist ClownVamp (CV) will share his first solo exhibition with the world, through a physical showing taking place at The Oculus at the World Trade Center in New York City -- curated by SuperRare and powered by TransientLabs.

CV shared that Chester Charles: The Lost Grand Master is an immersive artificial intelligence (AI) driven, alternate history, storytelling experience, that explores the story of self-censorship in historical queer art through the lens of his protagonist, Chester Charles.

To learn more about the artist's inspirations and process, as well as the history and future of queer art, Hypemoon spoke with CV, who expressed that most of our reality is just a curated version of the truth but that AI could help expand or challenge perception.

CHESTER CHARLES: The Lost Grand Master.

Announcing my first-ever solo show. ?‍♂️

An immersive AI-driven story.

June 21st at The Oculus - World Trade Center

Produced by @SuperRare and Powered by @TransientLabs.

10 months of work, culminating. ?

A thread... pic.twitter.com/u3oUEcbYRT

— ClownVamp (@ClownVamp) June 7, 2023

Conversation with ClownVamp

"We are living in a curated version of the truth every time we walk through a museum," - ClownVamp

Sharing the goal of the exhibition, CV said he wanted a show that would challenge people's perceptions of history and the accuracy of what they've been taught. He explained that "Instead, we are living in a curated version of the truth every time we walk through a museum."

To achieve this effect, CV shared that he "wanted to reflect back a fiction that rhymed with the truth. As a result, there are a lot of details intended to be strong facsimiles of what an actual retrospective would look like. The art is incredibly high resolution with brush and canvas details, each piece comes with the text of what would be on a museum label, and there was an inscription 'found' on the back of each piece."

The works in question are created by CV's alt-history protagonist, Chester Charles, who is described as "a lost impressionist painter, a man who wandered life with insatiable curiosity, a man whose work was self-censored and hidden from history -- a gay man."

Inspirations, Motivations, Process

"I often work on random AI experiments. Sometimes because I want to explore a feeling. Sometimes to try a new tool. Sometimes just because," CV shared.

Explaining how he landed on the concept of an alt-history retrospective, CV said that "Last year, I was creating some AI explorations around the concepts of fatherhood, trying to prompt a father-son walking in the woods, in a historical art style. The AI models at the time were very prone to twinning where they would replicate the subjects in your prompt," adding that "As a result, the AI model created a scene of two dads and a son. Seeing this result on the screen created this instant stir in me."

The generated scene ushered in a sort of revelation for CV who said "I wasn’t used to seeing gay scenes, let alone scenes of gay parenting, in historical art. As a gay man, I didn’t realize just how much I missed that when walking through museum galleries until that moment."

CV shared that he was lucky to have a long time to work on the project, allowing him to create a "mental glitch effect" that would have the audience questioning the existence of the work in legitimate history.

"Over time, I iterated on the idea and got lots of feedback from smart friends including Chris from Transient and Mika from SuperRare, eventually what became clear was that the best way to tell the story was to tell it from the perspective of a fictional artist, Chester Charles," said CV.

He further explained that "Much of the art canon is told through an academic lens of analysis, evaluating careers and the underlying aesthetic and biographical changes. The idea here is to use that familiar paradigm as a jumping-off point for a story."

Alt-History & Self-Censorship

For ClownVamp, AI represents the "ultimate remix machine." He shared that "It allows you to mix and mash styles in a way that is perfect for this sort of experiment. While the training data for these AI models doesn’t have historical queer art, it does have queer art and it does have historical art. As a result, these models can allow you to reconfigure the past by bringing these concepts together."

While much of the performative art show is created as an alt-history, some aspects are historically accurate or at the very least nod to early queer art.

Examples of this are found in Jim Van Buskirk's article observing the "Queer Impressions of Gustave Caillebotte," a 19th-century French Impressionist painter whose works often focused on the male form, depicting men in submissive poses and, in the case of 'Man at his Bath,' completely nude from the perspective of the male gaze.

Other works like Boating Party [Oarsman in a Top Hat] see what might be considered to be "coded" queer art -- where the overall composition is nothing out of the ordinary but the focal point brings viewers' eyes to the crotch of the oarsman.

Buskirk shared that "My aim here is not one of reductivism. It is not for me to determine whether or not Gustave Caillebotte might have been homosexual -- whatever that means -- but I do wonder why art historians are failing to ask questions in an effort to illuminate aspects which obviously distinguish his work from that of his contemporaries."

He added, "I would hope that future examinations of Caillebotte’s oeuvre include an exploration of the 'queer' gaze so abundantly evident in his work."

Back to the conversation with CV, he explained that "When we think about art that doesn’t exist, or at least that we don’t have a record of. It goes deeper than just what wasn’t celebrated, or what wasn’t collected. It also goes to what wasn’t even made. Self-censorship is the way that queer people have often had to navigate their environments."

"Say the wrong thing and suffer consequences ranging from shame to literal death. The result is that there is a massive amount of human potential that was never even expressed. What could have been? What would have been? What should have been?" the artist put forward, explaining the significance of AI as a tool to further explore these questions.

Question Reality

"My goal is for the viewer to realize that what we think of as deeply factual or even academic, is really a flawed reality," shared CV, adding that "It represents just what was allowed to be said, let alone recorded for posterity."

He further explained that "By showing an altered version of reality, my aim is for people to confront this. The show is meant to both transport you to a different time, but also feel unsettling at the same time. I want you to start to question the realities you face every day, and to think about how are some of these constraints still around today."

As CV looked to make the alt-history work as believable as possible, he mentioned that paint textures were a crucial aspect and explained that "One of the biggest breakthroughs was figuring out how to nail the painting textures I was looking for, [which he did] thanks to some great tips from friend and fellow artist Henry Daubrez and way too much time experimenting."

"The goal of the show is to make you question reality and so making it feel as close to real as possible was essential. I want you to smell the canvas when you look at the painting, especially when they’re blown up on the large screens of the physical show," he said.

Speaking on his partners, TransientLabs and SuperRare, CV said that "Chris Ostoich [COO] has been a lover and collector of AI work since before he joined Transient and has been a phenomenal brainstorming partner for how we can best leverage technology, narrative, and aesthetics to all work together."

As for the curation team from SuperRare, CV said that "Mika, Linda, and the SuperRare team have been phenomenal. I had thousands of pieces to pick from and they helped me craft the ideal aesthetic narrative that would also work on a storytelling level."

Those interested in viewing the solo exhibition, powered by TransientLabs and curated by SuperRare, will be able to attend both on-chain and in-person viewings starting June 21. A total of 23 pieces will be shown, with three to be auctioned on SuperRare starting at a 1 ETH reserve price -- additionally, the show will include an ETH and Tezos-based open edition.

Three pieces will be available via SuperRare auctions.

The reserve will be set to 1 ETH on Wednesday at 10AM.

Three pieces available:
- Doves by the Sea II, 1882
- The Crowded Stage, 1905
- Self Portrait, 1938

(Remaining pieces will be minted upon request after the show) pic.twitter.com/oUT5Kdj3oJ

— ClownVamp (@ClownVamp) June 19, 2023

"Between 'Chester Charles' and his guest curation, ClownVamp is marrying the queer medium of AI with the legacies of queer artists, past, present, or fictional, unsung or in hiding, erased or hypervisible," said SuperRare.

Elsewhere in art, see how ‘Human Unreadable’ showcases art longevity through an emotionally-driven experience.

Click here to view full gallery at Hypemoon

Dmitri Cherniak's 'The Goose' Sells for $6.2M At Sotheby's 3AC Auction

Scroll for any amount of time on NFT Twitter and you'll see renditions and remixes of Dmitri Cherniak's Ringers #879 aka "The Goose," as artists across the space celebrate Cherniak's significant Sotheby's sale, in what has been dubbed Goose Day.

The Goose was auctioned as part of a continuance of the "Grails" collection sale, consisting of works once owned by the now defunct Three Arrows Capital (3AC) group.

Estimated to sell for between two and three million dollars, The Goose surprised bidders and the Web3 space alike, realizing more than double that estimate at $6.2 million USD after fees, selling to Punk6529 -- who was "prepared to go higher."

Take a closer look at @dmitricherniak's Ringers #879, famously nicknamed 'The Goose.' Experts @katehannah, @sofiagarcia_io, and @michaelbouhanna discuss its captivating history & cultural significance. Discover more: https://t.co/pFiMNtYSpc pic.twitter.com/k28zaC6VbP

— Sotheby's Metaverse (@Sothebysverse) June 9, 2023

... and the final two minutes as the hammer came down, setting a new record for the artist and the 2nd highest price for a work of generative art. Congrats @punk6529! pic.twitter.com/QYzPcFIBmE

— Sotheby's Metaverse (@Sothebysverse) June 15, 2023

What makes The Goose so significant though and why has it now sold for millions of dollars not once but twice? The best answer to these questions comes from Cherniak himself, who in a recent tweet said "The Goose is and was significant because it helped open up this kind of art to a new, technically savvy group of people whose idea of creativity or culture is not the same as yours."

"Computer and code-based art is an art form that has been around for almost a century with very little fanfare," he explained, adding that "It is an extremely fascinating art form and has a rich history. It has been despised at many points throughout its history and its innovators were harassed. I am not an innovator in this sense, I have been able to develop my practice using the tools and documentation, techniques, as well as open source libraries mostly made by those who are my senior, and have contributed back where I can."

Cherniak continued to state, "Automation is my artistic medium and after spending years as an engineer solving fascinating and complex problems in creative ways, I wanted to do the same for visual art to make a point - maybe to myself and also to others as well. NFTs have been mostly discussed as an economic vehicle and a form of social mobility for artists but for artists using code, where algorithms, engineering practices, and randomness are so intertwined not just with output but our iterative practices, this kind of distributed computing system is a native form to our work."

While some outlets publish headlines like "3AC Bankruptcy Auction Nets $11M in What Might be Final Hurrah for NFTs," Cherniak said that the art form is "not going away and only more and more people will engage with coding as our population is forced to become more technical."

In other news, on-chain generative choreography in "Human Unreadable" showcases art longevity through an emotionally-driven collection experience.

Click here to view full gallery at Hypemoon

LVMH Announces Partnerships With Epic Games and Apple

During its most recent Innovation Award ceremony and show, Viva Tech, LVMH, the luxury conglomerate behind Louis Vuitton, revealed two new partnerships -- one with Epic Games to transform its creative pipeline and another with Apple to integrate the "Tap to Pay" system in U.S. stores.

The announcements come on the heels of Louis Vuitton's recently revealed VIA Treasure Trunk, which represents the first time the Maison has offered any of its luxury trunks in a digital or NFT form. The show also included metaverse experiences and awards for those building tools that utilize artificial intelligence (AI).

At @VivaTech, LVMH and Epic Games announce partnership to transform the Group’s creative pipeline and bring customers new types of immersive products and discovery experiences.

Learn More: https://t.co/f7phMjbhbI#LVMH #MetaHuman #VivaTech @UnrealEngine pic.twitter.com/UIUe4efVZL

— LVMH (@LVMH) June 14, 2023

Through its partnership with Epic Games, known for its Fortnite title and Unreal Engine platform, LVMH hopes to bring customers exciting new experiences including virtual fitting rooms, fashion shows, 360 product carousels, augmented reality (AR), product digital twins, and more.

LVMH shared that, to achieve these goals, it would utilize Epic's suite of cutting-edge 3D creation tools, including Unreal Engine, Reality Capture, Twinmotion, and MetaHuman, to accelerate its growth in the digital space.

"We have always been committed to innovations with the potential to bring our customers new experiences. Interactive games, which have developed into a full-fledged cultural phenomenon, are a perfect example. The partnership with Epic Games will accelerate our expertise in 3D tools and ecosystems, from the creation of new collections to ad campaigns and to our Maisons’ websites," shared Toni Belloni, LVMH Group Managing Director.

Bill Clifford, VP of Unreal Engine at Epic Games, also shared his excitement about the partnership, emphasizing the transformative potential of Epic's suite of advanced creator tools. Clifford stated, "With this partnership, we will work with LVMH's designers to transform physical and digital product creation using Epic's suite of advanced creator tools. We are excited to accelerate the Group's adoption of Unreal Engine, Reality Capture, Twinmotion, and MetaHuman technology and help LVMH's global brands engage with customers through immersive digital experiences."

Other highlights of the Viva Tech show included the announced plans to integrate Apple's "Tap to Pay" into physical retail stores, starting with those in the U.S. -- which the Maison expressed will create "an exciting new in-store experience."

Additionally, LVMH held a variety of award ceremonies, including those for the AI sector, with Gonzague de Pirey, LVMH Group Omnichannel & Data Officer stating that "Data and AI figure at the heart of all the solutions from the startups we recognized today."

The group also revealed new metaverse experiences through a virtual world called "The Journey," which prompts visitors to select from a variety of portals. Once a portal is selected, visitors are introduced to a variety of interactive elements, some containing AI artwork, information on design, videos that introduce creative teams, and more.

Whether through its blockchain consortium platform Aura or the launch of the VIA Treasure Trunk NFT, LVMH has shown that it is committed to an elevated and continued pursuit of the digital space.

In other news, could the blockchain change the face of watch trading?

Click here to view full gallery at Hypemoon

Love Visiting State & National Parks? California State Parks Harnesses 'AR' Tech With New Interac...

“I encourage everybody to hop on Google and type in ‘national park’ in whatever state they live in and see the beauty that lies in their own backyard. It’s that simple.”

– Jordan Fisher, singer/songwriter

Earlier this month, the California Department of Parks and Recreation shared an exciting announcement: a fun application of augmented reality (AR) technology for those who appreciate the outdoors and the beauty our state and national parks have to offer, beginning with its very own parks. 

California State Parks launched ‘Virtual Adventurer,’ a new AR mobile app that will transform how park visitors connect to and interact with California’s most iconic locations in addition to its deep history and diverse cultural and natural landscapes.

The experience spans nine participating state parks:

Anza-Borrego Desert State Park

Bodie State Historic Park

Colonel Allensworth State Historic Park

Jack London State Historic Park

Montaña de Oro State Park

Oceano Dunes State Vehicular Recreation Area (Oso Flaco Lake)

Old Town San Diego State Historic Park

Point Lobos State Natural Reserve

Sue-meg State Park

Powered by its underlying AR technology, Virtual Adventurer offers the public experiences that span from storytelling and holograms to 3D images and digital reconstructions, all highlighting California’s cultural, historic, and natural resources. 

The development of Virtual Adventurer was led by TimeLooper Inc., an immersive digital experience and exhibition firm. 

“[California] State Parks came to us with a vision to expand the scope of stories told in its parks in a manner that is highly immersive and relevant to today’s park visitors,” said TimeLooper Principal and Founder Andrew Feinberg. 

Unique to the experience is the app’s dynamic, evolving storytelling. It has also been designed to be one of the most accessible mobile apps on the market, offering users Americans with Disabilities Act-compliant accessible PDFs, audio descriptions, audio captioning, high-contrast colors, a dyslexic-friendly font, and more – all with the intention of ensuring the highest level of accessibility for anyone who wants to immerse themselves in this application of augmented reality. 

For example, the public can download the app and travel through Coyote Canyon in today’s Anza-Borrego Desert State Park with Maria Jacinta Bastida, an Afro-Latina woman traveling with the Juan Bautista De Anza expedition, or see Chinatown reemerge from the sagebrush at Bodie State Historic Park. 

Virtual Adventurer, according to California State Parks, will also be updated regularly to include newly added adventures and stories that help enrich the overall experience of spending time in these state parks. 

“We’re excited to launch the Virtual Adventurer app that further provides opportunities for Californians to access the cultural, historic and natural resources found across our beautiful state,” said California State Parks Director Armando Quintero. “The app also supports and enhances the department’s Reexamining Our Past Initiative by developing content for parks that tells a more complete, accurate and inclusive history of people and places.”

Visitors are encouraged to scan the below QR code to get started exploring California’s state parks:

“Helping park visitors to create deeper and more meaningful experiences in state parks is vitally important to connecting us all to the rich history of these places,” said Parks California Community Engagement Director Myrian Solis Coronel. “Through this app and emerging digital technology, we hope these tools will help all visitors see themselves as part of these special places and feel a sense of belonging.” Parks California, along with other park partners like Jack London Park Partners, Point Lobos Foundation, Tribal Nations, and university partners are also supporting content development.

WATCH: Hypemoon’s Bon Jenn walks us through the new Apple Vision Pro VR headset.

Click here to view full gallery at Hypemoon

Hip-Hop’s Most Famous Artifact Is Revived in Photographer Barron Claiborne’s “The Crown”

“We can’t change the world until we change ourselves.” 

– The Notorious B.I.G.

This week, photographer Barron Claiborne unveiled his latest digital collectible, “The Crown,” depicting the original crown worn by rapper The Notorious B.I.G. in 1997, as part of the King of New York (KONY) NFT collection.

The KONY NFT collection is a limited set of six items, comprising physical and digital NFTs, auction pieces, and exclusive works, all based on Claiborne’s 1997 KONY photo shoot of Biggie, born Christopher Latore Wallace (1972-1997). 

Could Blockchain Technology Change the Face of Watch Trading?

On Wednesday, June 14, two Rolex watches – a “Pepsi” GMT-Master II and a Milgauss Blue Dial – were used to secure a $14,500 USD loan, entirely on-chain.

The loan came about through a collaboration between DeFi lending protocol Arcade.xyz and 4K Protocol, a platform that facilitates physical NFT minting and logistics for marketplaces, brands, decentralized applications, and more.

These @ROLEX watches, stored with @4KProtocol, are being used as collateral for DeFi loans on Arcade.

Using real-world assets (RWAs) like luxury goods on-chain could unlock a massive market for DeFi. pic.twitter.com/17JB2R7z6I

The European Parliament Just Approved the Landmark ‘AI Act’ – Weightier Than the GDPR

The European Union took a major step on Wednesday toward giving the world its first set of governance rules around artificial intelligence, as the European Parliament passed a draft law known as the AI Act.

First proposed by the EU on April 21, 2021, the AI Act has been touted as the world’s first comprehensive regulatory framework for AI. Since its proposal, the European Commission, the Council of the EU, and the European Parliament have been amending and refining the initial draft, and the final version is not expected until later this year.

Travel the World With Snoop Dogg via a Dynamic Passport Token Powered by Transient Labs

As Snoop Dogg gears up for his upcoming world tour with Wiz Khalifa, he has teamed up with Web3 innovation platform Transient Labs to launch an NFT on Arbitrum, an Ethereum Layer 2 chain.

Dubbed the “Snoop Dogg Passport,” or simply the “Passport,” the token takes collectors on tour with Snoop Dogg, offering a dynamic collecting experience with continuously updated content that brings holders backstage and onto the tour bus.

“Really, what the Passport represents is three things: a behind-the-scenes look, with dynamic content for fans; access to a soon-to-be-launched website with merch and music; and third, airdrops from some of the greatest creators in the digital ecosystems,” Transient Labs COO Chris Ostoich said in a conversation with CoinDesk.

Why ‘Human Unreadable’ Showcases Art Longevity Through an Emotionally-Driven Collection Experience

Over the course of nine months, pouring their life’s work into what has been one of Art Blocks’ most successful projects, Dejha Ti and Ania Catherine developed an on-chain generative choreography method that serves as the backbone of their now sold-out “Human Unreadable” digital art collection. 

Having minted out within 30 minutes, “Human Unreadable” is the brainchild of both Catherine and Ti, who have spent countless hours creating a method that prioritizes “human messiness and chaos” within a highly mathematical, engineering-heavy process.

Catherine and Ti are an award-winning experiential artist duo who create through their collective art practice, Operator, which they launched in 2016. 

As two “critical contemporary voices” on digital art’s international stages, the duo and ‘LGBT power couple’ let their expertise collide in large-scale conceptual works that are highly recognized for their nuanced integration of emerging technologies. 

Ti’s background as an immersive artist and human-computer interaction technologist, and Catherine’s as a choreographer and performance artist, bring together two worlds that showcase a beautiful harmony between our current digital infrastructure and that of Web3. 

The Berlin-based duo has appeared on BBC Click, Bloomberg ART+TECHNOLOGY, Christie’s Art+Tech Summit, SCAD Museum of Art, MIT Open Doc Lab, Art Basel, and many more. 

Spanning a three-act structure – Reveal, Decipher, and Witness – Human Unreadable’s story unfolds over several months: the artwork reveal this spring, the uncovering of the choreographies used to create the generative model at the end of June, and lastly, a live performance of those choreographies from the first 100 pieces in the collection (#2 to #101) later this year. 

In bringing the pieces of Human Unreadable to life, Ti and Catherine built a team of more than 25 people – from highly experienced engineers to professional dancers – to help give life to the choreography as it was combined with black-and-white portrait photos of them, X-ray shading, and generative glass objects. 

With choreography at the heart of Human Unreadable, Catherine and Ti have firmly resisted ever separating the underlying choreography from the secondary token that's bound to the primary Art Blocks token, because it’s that choreographic score and unique sequence that generated the Art Blocks token to begin with. 

“Everyone assumes that the reveal of the artwork is the end of the story,” Catherine stated in a Twitter Spaces on May 25, hosted by David Cash of Cash Labs. She touched on the industry “go-to” of traditional collecting and the experiences attached to them, distinguishing the different mindset one has if you approach art as if it were a theater or ballet performance – divided into “acts.”

Thankfully, the digital art community is finally beginning to understand the value beyond a traditional mint, as the reveal is only a small component in an artwork’s journey of creating genuine impact and leaving a lasting legacy. 

Through the fusion of code, choreography, and generative art, Human Unreadable is a perfect embodiment of evolving art that redefines what it means to pour one’s soul into a piece, while advocating for an emotionally fueled NFT minting experience.

Vulnerability and Meaningful Exploitation

When it comes to injecting heart and soul into the project, Ti spoke to Hypemoon about the thematic element of vulnerability and exploitation that clearly defines the foundation of Human Unreadable:

“Hero your voice, hero the concept. Avoid the temptation to hide behind the novelty of technology or market mechanisms. Avoid masking your voice or expression with what technology can do, but instead use technology to dig deeper into and/or expressing other selves – even if it feels risky, imperfect, and doesn’t fit into what people expect to encounter in a sea of polished digital personas.”

It’s in these very moments that both Catherine and Ti embrace the reality of failure and/or exploitation and how to navigate those waters, which many come to fear and work to avoid.

“That takes vulnerability and courage because there is a chance of failure or feeling exposed. What we do know for sure is tech doesn’t age well, but concept and honesty do,” Ti added. 

When artists showcase their work and put themselves in front of such a large audience, how we perceive that kind of public exposure – and whether it amounts to exploitation – can certainly change depending on the underlying motivations.

“Unfortunately, the world is full of exploitative scenarios for artists, not just limited to Web3. Artists need to always remind themselves that they bring value to the table, and also keep that in mind when they see an ‘opportunity for artists’ to look closely in making sure it's not just an opportunity for people who don’t care about art to extract their value,” Ti says.

In that context, she also emphasized the importance of artists knowing “when to be protective and guarded.”

“At the same time, artists can’t and shouldn’t try to do everything themselves—it's not effective, it’s not good for the art and will cause burn out. Operator’s practice is highly collaborative, not just in the creative sense, but also in the operational sense. For us, we only work with kind people where there is high trust and honest communication. If there is respect, trust and an intimate understanding of the art practice, then there’s more room to be open with collaborators and partners which is essential for making exceptional things happen.”

At the end of the day, both Ti and Catherine want collectors to embrace the beauty and nuance of "human messiness."

“We want collectors to walk away with: a piece that reminds them of the beauty of complexity and human messiness, the feeling that vulnerability is not a weakness, excitement that they are at the beginning of choreography being collected as an art object, and curiosity to further explore movement and performance."

In other news, read about AI startup Gensyn landing a $43 million USD funding round, led by a16z

Click here to view full gallery at Hypemoon

Are We Looking At AI All Wrong? Why We May Be Ready for the Next Stage of Computing to Help Us Be...

For humans, symbolism is the key to understanding the world around us; it’s how we interpret objects, ideas, and the relationships between and among them. 

We are wholly dependent upon analogy, which is what makes our current computing technology extremely convoluted, complex, and at this point in time, archaic. 

Despite the growing popularity of artificial intelligence (AI), the use cases we are already seeing with OpenAI's ChatGPT aren't necessarily applications that go beyond mere "hype" and stock inflation.

Under traditional computing, we don’t fully understand what these artificial neural networks (ANN) are doing or why they even work as well as they do. The utter lack of transparency is also a major obstacle to understanding how data is collected and analyzed to produce the results we so eagerly attach ourselves to and label as “progress.”

Consider the following example of an ANN that is able to distinguish “circles” and “squares” from one another. 

One way to achieve that distinction is the obvious – if one output layer indicates a circle, and the other indicates a square. 

But what if you wanted the ANN to discern that particular shape’s “color” – is it “red” or “blue”? 

Since “color” is an entirely separate data set, it requires additional output neurons to be able to account for that feature in the final output. In this case, there would need to be four output neurons – one each for the blue circle, blue square, red circle, and red square. 

Now, what if we wanted a computation that also considered additional information, such as “size” or “position/location”? 

More features mean more neurons that need to account for each possibility associated in defining that particular feature (or combination of features) with the “circle” and the “square”. 

In other words, it becomes incredibly complex. 

Bruno Olshausen, a neuroscientist at the University of California, Berkeley, recently spoke to this need for having a neuron for every possible combination of features.

“This can’t be how our brains perceive the natural world, with all its variations. You have to propose…a neuron for all combinations,” he said, further explaining that we would, in essence, need “a purple Volkswagen detector” or some equally obscure unit to account for every possible combination of information we might hope to consider in any given experiment.

Enter ‘hyperdimensional computing’.

What Is ‘Hyperdimensional Computing’?

The heart of hyperdimensional computing is the algorithm’s ability to decipher specific pieces of information from complex images (think of metadata) and then represent that collective information as a single entity, known as a “hyperdimensional vector.”

Unlike traditional computing, hyperdimensional computing allows us to solve problems symbolically and in a sense, be able to efficiently and accurately “predict” the outcome of a particular problem based on the data contained in the hyperdimensional vector. 

What Olshausen argues, among other colleagues, is that information in the brain is represented by the activity of a ton of neurons, making the perception of our fictitious “purple Volkswagen” impossible to be contained by a single neuron’s actions, but instead, through thousands of neurons that, collectively, come to comprise a purple Volkswagen.

With the same set of neurons acting differently, we could see an entirely different concept or result, such as a pink Cadillac. 

The key, according to a recent discussion in WIRED, is that each piece of information, such as the idea of a car or its make, model, color, or all of them combined, is represented as a single entity – a hyperdimensional vector or hypervector.

A “vector” is just an ordered array of numbers – 1, 2, 3, etc – where a 3D vector consists of three numbers – the x, y, and z coordinates of an exact point in 3D space.

A “hypervector”, on the other hand, could be an array of thousands or even hundreds of thousands of numbers that represents a point in a space of that many dimensions. For example, a hypervector consisting of 10,000 numbers represents a point in 10,000-dimensional space.
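As a minimal sketch of this idea (assuming bipolar ±1 components and NumPy; none of the names below come from the article), randomly generated hypervectors of this size are nearly orthogonal to one another, which is what lets each one stand for a distinct concept without interfering with the others:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 10_000  # dimensionality of each hypervector

def random_hypervector(dim=DIM):
    """A random bipolar hypervector: each component is +1 or -1."""
    return rng.choice([-1, 1], size=dim)

def similarity(a, b):
    """Normalized dot product: 1.0 for identical vectors, ~0 for unrelated ones."""
    return a @ b / len(a)

x, y = random_hypervector(), random_hypervector()
print(similarity(x, x))  # 1.0
print(similarity(x, y))  # close to 0: random hypervectors are nearly orthogonal
```

In 10,000 dimensions the chance overlap between two random hypervectors is on the order of 1/√10,000 = 0.01, so thousands of concepts can coexist with negligible crosstalk.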

This level of abstraction affords us the flexibility and ability to evolve modern computing and harmonize it with emerging technologies, such as artificial intelligence (AI). 

“This is the thing that I’ve been most excited about, practically in my entire career,” Olshausen said. To him and many others, hyperdimensional computing promises a new world in which computing is efficient and robust and machine-made decisions are entirely transparent.

Transforming ‘Metadata’ Into Hyperdimensional Algorithms to Generate Complex Results

The underlying algebra tells us why the system chose that particular answer, which cannot be said for traditional neural networks. 

Developing hybrid systems in which neural networks map real-world objects to hypervectors, and hyperdimensional algebra then takes over, is the crux of how AI should be used to actually empower us to better understand the world around us.

“This is what we should expect of any AI system,” says Olshausen. “We should be able to understand it just like we understand an airplane or a television set.”

Going back to the example with “circles” and “squares” and applying it to high-dimension spaces, we need vectors to represent the variables of “shape” and “color” – but also, we need vectors to represent the values that can be assigned to the variables – “CIRCLE”, “SQUARE”, “BLUE”, and “RED.”

Most importantly, these vectors must be distinct enough to actually quantify these variables. 
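One common way to realize this (a sketch under the assumption of bipolar hypervectors; the variable names are illustrative, not taken from any specific system) is to “bind” a variable to its value with element-wise multiplication and “bundle” the bound pairs by addition, producing a single record hypervector that can later be queried algebraically:

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 10_000

def hv():
    """A random bipolar hypervector."""
    return rng.choice([-1, 1], size=DIM)

# Distinct random hypervectors for the variables and their possible values.
SHAPE, COLOR = hv(), hv()
CIRCLE, SQUARE, BLUE, RED = hv(), hv(), hv(), hv()

# Bind variable to value (element-wise product), bundle pairs (addition):
record = SHAPE * CIRCLE + COLOR * RED  # a single hypervector for "red circle"

# Query: unbinding with SHAPE recovers a noisy copy of the shape's value,
# because SHAPE * SHAPE is all ones and the other pair becomes random noise.
noisy_shape = record * SHAPE
best = max([("CIRCLE", CIRCLE), ("SQUARE", SQUARE)],
           key=lambda kv: noisy_shape @ kv[1])
print(best[0])  # → CIRCLE
```

Adding more features (“size”, “position”) just means bundling more bound pairs into the same record, rather than multiplying output neurons — which is exactly the combinatorial relief Olshausen describes.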

Now, let’s turn attention to Eric Weiss, a student of Olshausen, who in 2015, demonstrated one aspect of hyperdimensional computing’s unique abilities in how to best represent a complex image as a single hyperdimensional vector that contains information about ALL the objects in the image – colors, positions, sizes. 

In other words, an extremely advanced representation of an image’s metadata. 

“I practically fell out of my chair,” Olshausen said. “All of a sudden, the light bulb went on.”

At that moment, more teams began focusing their efforts on developing “hyperdimensional algorithms” to replicate the “simple” tasks that deep neural networks had already been engaged in two decades prior – such as classifying images. 

Creating a ‘Hypervector’ For Each Image

For example, if you were to take an annotated data set that consists of images of handwritten digits, this hyperdimensional algorithm would analyze the specific features of each image, creating a “hypervector” for each image.

Creating a “Class” of Hypervectors for Each Digit

From there, the algorithm would add the hypervectors for all images of “zero” to create a hypervector for the “idea of zero,” repeating that process for all the digits and generating 10 “class” hypervectors – one for each digit.

Those stored classes of hypervectors are now measured and analyzed against the hypervector created for a new, unlabeled image for the purpose of the algorithm determining which digit most closely matches the new image (based on the predetermined class of hypervectors for each digit). 
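The two steps above can be sketched end to end. This is a toy illustration, not the actual pipeline from any published work: the image-to-hypervector encoder here is a hypothetical fixed random projection followed by a sign, and the “digits” are synthetic feature vectors rather than real handwriting:

```python
import numpy as np

rng = np.random.default_rng(2)
DIM, N_FEATURES = 10_000, 64  # e.g. 8x8 pixel images, flattened

# A fixed random projection stands in for a real image encoder (assumption).
projection = rng.normal(size=(N_FEATURES, DIM))

def encode(image):
    """Map a feature vector to a bipolar hypervector."""
    return np.sign(image @ projection)

def train(images, labels, n_classes=10):
    """Bundle (sum) the hypervectors of each class into one class hypervector."""
    classes = np.zeros((n_classes, DIM))
    for img, lbl in zip(images, labels):
        classes[lbl] += encode(img)
    return classes

def classify(classes, image):
    """Pick the class whose hypervector is most similar to the image's."""
    return int(np.argmax(classes @ encode(image)))

# Synthetic data: one prototype per digit, five noisy training copies each.
prototypes = rng.normal(size=(10, N_FEATURES))
train_imgs = [p + 0.3 * rng.normal(size=N_FEATURES) for p in prototypes for _ in range(5)]
train_lbls = [d for d in range(10) for _ in range(5)]
classes = train(train_imgs, train_lbls)
print(classify(classes, prototypes[3]))  # → 3 (its own class)
```

Training is just addition and classification is just a dot product, which is why hyperdimensional classifiers are attractive for low-power hardware.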

IBM Research Dives In

In March, Abbas Rahimi and two colleagues at IBM Research in Zurich used hyperdimensional computing with neural networks to solve a classic problem in abstract visual reasoning – something that has presented a significant challenge for typical ANNs, and even some humans. 

The team first created a “dictionary” of hypervectors to represent the objects in each image, where each hypervector in the dictionary represented a specific object and some combination of its attributes. 

From there, the team trained a neural network to examine an image to generate a bipolar hypervector – where a particular attribute or element can be a +1 or -1. 

“You guide the neural network to a meaningful conceptual space,” Rahimi said.

The value here is that once the network has generated hypervectors for each of the context images, and for each candidate for the blank slot, another algorithm is used to analyze the hypervectors to create “probability distributions” for a number of objects in the image.

In other words, algebra can be used to predict the most likely candidate image to fill the vacant slot. The team’s approach yielded nearly 88 percent accuracy on one set of problems, where neural-network-only solutions were less than 61 percent accurate.
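The final selection step can be sketched as follows (a simplified stand-in, not IBM's actual algorithm: here the "query" is just a noisy copy of one candidate's hypervector, whereas the real system derives it algebraically from the context panels):

```python
import numpy as np

rng = np.random.default_rng(3)
DIM = 10_000

def hv():
    """A random bipolar hypervector."""
    return rng.choice([-1, 1], size=DIM)

# Hypothetical candidate hypervectors produced by the trained network.
names = [f"candidate_{i}" for i in range(8)]
candidates = {name: hv() for name in names}

# Stand-in for the context-derived query: candidate_5 with 10% of signs flipped.
query = candidates["candidate_5"] * np.where(rng.random(DIM) < 0.1, -1, 1)

# Similarities -> softmax gives a probability distribution over candidates.
sims = np.array([query @ candidates[n] / DIM for n in names])
weights = np.exp(10 * sims)
probs = weights / weights.sum()
print(names[int(np.argmax(probs))])  # → candidate_5
```

Because the answer falls out of dot products over explicit hypervectors, every step of the choice can be inspected — the transparency advantage the article emphasizes.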

We’re Still In Infancy

Despite its many advantages, hyperdimensional computing is still very much in its infancy and requires testing against real-world problems and at much bigger scales than what we’ve seen so far – for example, the need to efficiently search over 1 billion items or results and find a specific result. 

Ultimately, this will come with time, but it does raise the questions of where and how we apply and integrate artificial intelligence. 

Read about how a 40-minute church service, powered by AI, drew in over 300 attendees in Germany as a first-of-its-kind experiment.

Click here to view full gallery at Hypemoon

Artificial Intelligence Delivers an Experimental Lutheran Church Service to 300 Attendees

An artificial intelligence (AI) powered church service drew in just over 300 attendees in Germany. As a first-of-its-kind experiment, the 40-minute sermon revealed several useful applications of AI technology and several significant shortcomings.

Filled to capacity, St. Paul's church in Fuerth, Germany, became the first to hold an "experimental Lutheran Church service," with 98 percent of the service being organized by ChatGPT and led by four AI avatars, according to comments made by theologian and philosopher Jonas Simmerlein to the Associated Press.

Worship through #AI or #WorshipAI ?

Church service in Germany draws 300+ people through 40 minutes of prayer, music, sermons and blessings.#ChatGPT generated speeches and AI pastors calls in to question the use of AI in the context of spiritualityhttps://t.co/x8iQkAoD7J pic.twitter.com/s19gnHYZLw

— Neutron ? (@jeffrey_neutron) June 12, 2023

29-year-old Simmerlein explained, "I conceived this service — but actually, I rather accompanied it because I would say about 98% comes from the machine."

As part of the convention of Protestants in Germany, the AI church service generated significant interest, resulting in a long queue forming outside the neo-Gothic church building before it commenced.

The convention, known as Deutscher Evangelischer Kirchentag, is a biennial event that attracts tens of thousands of believers who gather to pray, sing, and engage in discussions about their faith and global issues. This year's theme, "Now is the time," provided the foundation for Simmerlein's request to ChatGPT to develop a sermon.

The AI-generated service touched upon leaving the past behind, addressing present challenges, overcoming the fear of death, and maintaining unwavering trust in Jesus Christ -- presented by four different avatars, two young women and two young men.

Towards the beginning of the service, viewers seemed to be intrigued or perhaps just curious as to what an AI service might look and sound like -- as the first avatar said "Dear friends, it is an honor for me to stand here and preach to you as the first artificial intelligence at this year’s convention of Protestants in Germany."

However, as the sermon went on, the audience expressed mixed feelings, some laughing at emotionless platitudes that the avatars delivered in soulless, monotonous voices. Others, like Heiderose Schmidt, a 54-year-old IT professional, shared that she was excited at first but grew increasingly put off as the service went on.

She explained that "There was no heart and no soul," adding that "The avatars showed no emotions at all, had no body language, and were talking so fast and monotonously that it was very hard for me to concentrate on what they said."

As the service finished, there appeared to be a consensus that while AI in religion might offer potential benefits, like increased accessibility for those with physical or language barriers, it also poses potential risks, as AI can be deceiving and may inadvertently promote biased or one-sided perspectives -- not to mention the lack of spirituality, which many congregation members lean on.

Simmerlein emphasized that his intention is not to replace religious leaders with AI but rather to aid them in their work. He proposed using AI to generate ideas for sermons or streamline sermon preparation, allowing pastors to focus on individual spiritual guidance for their congregations.

Despite best intentions, the experiment revealed the limitations of AI in religious settings. Unlike human pastors who interact and connect with their congregations on a personal level, AI lacks the ability to respond to laughter or other reactions, highlighting the importance of human presence and understanding within religious communities.

In other news, AI is being reimagined by scientists using 'hyperdimensional computing.'


PUMA Continues Web3 Exploration With New Metaverse Experience

PUMA has revealed its latest exploration of the metaverse and NFT space, just a week after its announced Web3 collaboration with LaMelo Ball and NFT brand Gutter Cat Gang.

This time, through an immersive digital world titled Black Station 2, visitors are invited to participate in several retail offerings, including two digital wearables and a physical sneaker.

Black Station is now LIVE! ⚡️

PUMA’s digital experience reveals new limited edition shoes in an entirely new light...

Explore UNKAI & UNTER for yourself and discover the mysteries of these worlds ?

ENTER EXPERIENCE: https://t.co/5EXphtTxSA pic.twitter.com/lRpOFw0nqe

— PUMA.eth (@PUMA) June 13, 2023

This expansion of its existing digital retail destination, Black Station, sees a new metaverse-based platform that aims to reinvent the omnichannel shopping experience by merging the digital and physical worlds, allowing users to purchase exciting phygital footwear.

Inside Black Station 2, users will find immersive shopping experiences that, at the time of writing, feature two distinct "worlds," each created to highlight various aspects of PUMA's footwear designs.

The first world, named Unkai, draws inspiration from the vibrant city of Shibuya in Japan, incorporating its lively colors and energetic elements into the footwear. The second world, Unter, takes inspiration from Berlin's underground club scene, with design elements reflecting the city's renowned club culture.

To kick things off, the inaugural release will feature the highly anticipated Fast-RB. The footwear combines PUMA's pinnacle running technologies, featuring four strategically placed NITRO pods and three PWRPLATES -- delivering a bouncy running sensation that PUMA says is "unlike anything else on the market."

Notable features of the Fast-RB include INITRO, an innovative Nitrogen-infused foam technology that offers exceptional responsiveness in an incredibly lightweight package, and PWRPLATE, designed to stabilize NITRO midsoles while maximizing energy transfer for powerful propulsion.

To encourage wider adoption of PUMA's Web3 spaces, the company is now accepting credit card payments in addition to cryptocurrency for purchases within Black Station 2 -- provided by payment solutions provider MoonPay.

"We're thrilled to invite our community into these new worlds that provide an unparalleled shopping experience," stated Ivan Dashkov, Head of Web3 at PUMA, adding that "PUMA aims to meet our community where it shops while also exploring new and exciting opportunities within cryptocurrency and the metaverse."

Black Station 2 is live now for visitors to explore, and for those looking to purchase the Fast-RB, access passes can be found on secondary markets.

In other news, adidas Originals and acclaimed artist FEWOCiOUS unveil an exciting collaboration.
