May 2023: Welcome to the alpha release of TYPE III AUDIO.
Expect very rough edges and very broken stuff—and daily improvements.
Please share your thoughts, but don't share this link on social media, for now.
We only have recent episodes right now, and there are some false positives. Will be fixed soon!
We start by sharing the birth story of our second child (less dramatic this time, but still interesting), then discuss the AI hearing in Congress that took place this week. Lawmakers, mindful of the lessons of social media, are planning to get ahead of the curve. We cover why AI legislation is especially difficult, AI copyright, how the 2024 presidential election might change because of AI, the impact on human jobs, and how even live streams can now be deepfaked!
If our podcast brings you laughter and knowledge, please consider supporting us as a sponsor. For the price of one Starbucks a month, you can help us keep creating quality content!
矽谷輕鬆談 links ➡️ https://linktr.ee/jktech
(00:55) The birth of our second child
(17:07) Sam Altman at the AI hearing
(21:13) Even live streams can be deepfaked
(24:10) Why is AI legislation so difficult?
(31:54) AI and copyright
(36:19) One-on-one personalized disinformation
In episode 32, Lucas Lopatin and Cristóbal Perdomo dig into the hottest topics in the business world. We explore the fascinating world of AI equity and Sam Altman's vision for the future of artificial intelligence. We also tackle the big question: where do valuations stand, and what has changed between 2019 and now?
We also dive into the idea of living your best life and achieving personal and professional success with the concept of "Your Best Life." Finally, we explore the exciting field of impact investing and how to generate a positive impact on the world while earning financial returns. Get ready for a dose of knowledge and motivation in this must-listen episode of the podcast!
__
Links mentioned:
https://www.amazon.com/House-Morgan-American-Banking-Dynasty/dp/0802144659
https://twitter.com/erica_wenger/status/1626343733875843072?s=20
__
02:14 Impact investing
05:21 Lucas's mission
06:24 The impact
08:35 Climate tech
10:20 An incredible hack
14:17 Sam Altman
16:30 Signal
20:21 Jeff Bezos
24:20 Where do valuations stand today?
30:38 The disruptors
35:00 Fads and waves
36:30 Your best life
44:20 The benefits of money
47:10 Messi
__
Meet Bosco Soler!
Got a question? Write to us and follow us on:
Twitter: @CristobaPerdomo and @llopatin
LinkedIn: Lucas Lopatin and Cristobal Perdomo
Visit:
--- Send in a voice message: https://podcasters.spotify.com/pod/show/indie-vs-unicornio/message
INSTAGRAM: https://www.instagram.com/throughtheweb.podcast/
WATCH THE VIDEO: https://youtu.be/63Y_3NS-kDw
Follow us on Twitter to engage with our work: https://twitter.com/throughtheweb
00:00 - Hook
00:15 - THE NEW INTRO!
00:48 - Linus steps down as CEO
04:22 - Dagogo's Broken Mac
06:03 - Quantum Computing
10:43 - Retrospective Outlook
14:26 - Tawsif is doing a 12K
15:53 - TikTok banned in Montana
23:55 - AR/VR for the workspace?
28:49 - Looking back at the pandemic
33:11 - Elon says WFH is "immoral"
37:20 - Tesla to advertise for the first time
43:04 - Sam Altman and OpenAI Regulation
46:04 - Meta's Imagebind AI
47:53 - Wendy's AI to take drive-thru orders
49:07 - Replying to comments
56:39 - Outro
Produced by: Dagogo Altraide (ColdFusion), Tawsif Akkas
Shot and edited by: Brayden Laffrey
Sam Altman received a warmer welcome from lawmakers than other tech CEOs at a recent Senate hearing
In this week's episode, Logan and Evan discuss AppHarvest's latest SEC filings, TikTok's ban in Montana, and Sam Altman's testimony to the Senate about the dangers of AI and calls for regulation.
AppHarvest, a vertically integrated indoor farming company, filed its 8-K and 10-Q with the SEC on May 10th. The 8-K report details unscheduled material events, while the 10-Q report provides a quarterly update on the company's financial position. The SEC filings paint a grim picture for AppHarvest, which has been struggling financially since its IPO in 2021. The company has defaulted on loans, its crop yields have been impacted by pests and a Listeria outbreak, and it has received a delisting notice from the Nasdaq. AppHarvest is now expected to run out of operating capital in Q3.
In other news, Montana has become the first state to ban TikTok, a popular social media app owned by Chinese company ByteDance. Montana Governor Greg Gianforte said he is banning TikTok to protect his state's residents from China, which he claims is using the app to spy on Americans. TikTok has denied these allegations, and it has said it will challenge Montana's ban in court.
Finally, Sam Altman, the CEO of OpenAI, testified before the Senate Judiciary Committee on May 17th about the dangers of AI and the need for regulation. Altman warned that AI could be used to create autonomous weapons systems, spread misinformation, and undermine democracy. He called for AI businesses to be subject to licensing and testing requirements, and he said that firms like OpenAI should be independently audited.
Visit us at MiddleTech.com
Follow Us
Logan's Twitter
Evan's Twitter
Middle Tech is proud to be supported by:
It's Monday, May 22nd — Here are the top AI, tech and ecommerce stories from last week and why they matter this week...
- Shopify Beta Launches World's First Global Entrepreneurship Index
- Shopify Checkout is the best-converting in the world. Here's why.
- Sam Altman says a government agency should license AI companies — and punish them if they do wrong
- OpenAI Introduces ChatGPT App for the iPhone
- Introducing the ChatGPT app for iOS
- ICYMI: Instagram's New App Could Be Here By June
If you liked this episode, please rate, review, or subscribe to us on whatever platform you’re listening from.
Rebuy is the fastest-growing app for Shopify Plus and the trusted commerce-AI platform for thousands of top brands across the world.
Learn how Rebuy’s intelligent shopping experiences can boost your business’ AOV, conversions and LTV by booking a demo right here.
《M觀點》special offer - 10.3-inch mooInk Pro 2 + folding case bundle, plus two free books of your choice from a list of five: https://readmoo.pse.is/4xsbq4
《M觀點》special offer - 10.3-inch mooInk Pro 2, plus two free books of your choice from a list of five: https://readmoo.pse.is/4y5ean
---
This episode's topics - Sam Altman goes to Congress, pure 64-bit x86-S, Tesla Optimus progress
---
M觀點 links
---
科技巨頭解碼 (Tech Giants Decoded): https://bit.ly/2XupBZa
M觀點 Telegram - https://t.me/miulaviewpoint
M觀點 IG - https://www.instagram.com/miulaviewpoint/
M觀點 Podcast - https://bit.ly/34fV7so
M觀點 YouTube channel - https://bit.ly/2nxHnp9
M觀點 Facebook page - https://www.facebook.com/miulaperspective/
For business and partnership inquiries, contact miula@outlook.com
(0:00) Bestie intros!: Reddit Performance Reviews
(5:14) Quick AIS 2023 update
(6:27) AI Senate hearing: Sam Altman playing chess with regulators, open-source viability
(21:25) Regulatory capture angle, "preemptive regulation," how AI relates to nuclear, insider insights from the White House's AI summit
(43:33) Elon hires new Twitter CEO
(48:46) Lina Khan moves to block Amgen's $27.8B acquisition of Horizon Therapeutics
(1:02:30) Apple's AR goggles are reportedly being unveiled at WWDC in early June, platform shift potential, how it deviates from Apple's prior launch strategies
(1:07:45) Residential and commercial real estate headwinds
(1:15:29) Unrest in SF and NYC after two recent tragic incidents ended in death
(1:24:20) Soros's political motivations for backing progressive DAs
Follow the besties:
https://twitter.com/DavidSacks
Follow the pod:
https://twitter.com/theallinpod
https://linktr.ee/allinpodcast
Intro Music Credit:
https://twitter.com/yung_spielburg
Intro Video Credit:
https://twitter.com/TheZachEffect
Referenced in the show:
https://twitter.com/chamath/status/1658913074106236931
https://www.mosaicml.com/blog/mpt-7b
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
https://artificialintelligenceact.eu
https://futurism.com/the-byte/warren-buffett-ai-atom-bomb
https://www.ftc.gov/system/files/ftc_gov/pdf/2310037amgenhorizoncomplainttropi.pdf
https://sharegpt.com/c/O6OKM9D
https://www.wsj.com/articles/apple-mixed-reality-headset-9213ac1b
https://twitter.com/PalmerLuckey/status/1657828947877560323
https://twitter.com/DavidSacks/status/1658955195450286081
https://www.foxnews.com/us/witness-jordan-neely-chokehold-death-calls-daniel-penny-hero
https://www.reddit.com/r/nyc/comments/1njfls/try_to_stay_away_from_the_michael_jackson
https://en.wikipedia.org/wiki/1984_New_York_City_Subway_shooting
https://en.wikipedia.org/wiki/Guardian_Angels
https://www.messynessychic.com/2019/03/26/in-the-shadows-with-nycs-self-styled-guardian-angels
This week's special guest on the roundtable is Jaime Novoa of Dealflow. Together with him, we discuss the week's most relevant stories.
Once again, Sam Altman and Elon Musk take center stage with Tesla and OpenAI. On one hand, the former testifies before the US Congress and calls for regulation of AI language models. On the other, Elon publicly clashes with Altman's company and stirs up fresh controversy with his statements.
All this and more on the Itnig roundtable...
Follow us on Twitter:
• Bernat Farrero: @bernatfarrero
• Jordi Romero: @jordiromero
• César Migueláñez: @heycesr
EVENTS
Pitch to Investors (every Thursday at 7 p.m.) - https://itnig.net/events/
Itnig Talks - https://youtube.com/playlist?list=PLs...
ABOUT ITNIG
Twitter - https://twitter.com/itnig
LinkedIn - https://es.linkedin.com/company/itnig
Instagram - https://www.instagram.com/itnig/
Newsletter - https://itnig.net/newsletter/
Web - https://itnig.net/
LISTEN TO OUR PODCAST ON
Spotify: http://bit.ly/itnigspotify
Apple Podcasts: http://bit.ly/itnigapple
Want to learn about the new ChatGPT plugins and other updates that could affect site owners?
Don't miss this week's episode of Niche Pursuits News!
Spencer and Jared kick things off by discussing the new ChatGPT plugins and brainstorming the various ways they can help users automate everything from email to SEO. Plus, the new opportunities for entrepreneurs with plugin ideas.
They also discuss OpenAI CEO Sam Altman's hearing before Congress earlier this week concerning the possible risks and implications of AI. A four and a half hour testimony covering everything from the dangers of misinformation to political fallout, with one conclusion being that 'AI is more dangerous than you think'.
But since Google Bard is now available to the public, the guys were able to test it out with some fun prompts including asking it to compare itself to ChatGPT with some surprising results.
They also unpack Google's upcoming Core Web Vitals metric INP, which measures "interaction to next paint" rather than just "first input delay" and could change the game in user experience metrics.
Then it's onto the side hustles, where Spencer unveils his year in the making WordPress plugin, Rank Logic, a powerful tool for tracking keyword rankings and content performance which is now open for a limited number of beta testers.
Meanwhile, Jared introduces his new service on Weekend Growth that creates email newsletter funnels for niche websites and is already receiving orders and positive feedback!
And as always, they wind things down by looking at peculiar niche sites, starting with Unnecessary Inventions: a site created by an inventive content creator who turned his unique thinking into a profitable business, followed by millions of fans on YouTube and Instagram.
They also discuss Reality Steve, a site dedicated to spoiling the results of the dating reality show The Bachelor.
In both cases, the sites have sparked interesting careers for their creators but the guys ponder the various ways they could boost their monetization.
So, sit back and enjoy another episode of industry insights, personal projects, and peculiar niches to help you get ready for the weekend!
Be sure to get more content like this in the Niche Pursuits Newsletter Right Here: https://www.nichepursuits.com/newsletter
Want a Faster and Easier Way to Build Internal Links? Get $15 off Link Whisper with Discount Code "Podcast" on the Checkout Screen: https://www.nichepursuits.com/linkwhisper
EU regulators say 'game on' for the Microsoft-Activision deal, WeWork's CEO is stepping down, and the latest call for AI regulation is coming from inside the house. Kara and Scott discuss the weak plan to ban TikTok in Montana. And Elon Musk continues to Elon Musk, saying he’s willing to lose money to say what he wants to say. Also, a prediction about the writer's strike.
Listen to On with Kara Swisher's episode, "Will Elon Musk Turn Twitter Into Yahoo Mail?" here.
Send us your questions! Call 855-51-PIVOT or go to nymag.com/pivot.
Learn more about your ad choices. Visit podcastchoices.com/adchoices
Join Alex Tapscott and Andrew Young as they decode the world of DeFi. Listen in as they discuss a roundup of current macroeconomic themes, the looming debt ceiling situation in the United States, potential Bitcoin selling pressure resulting from the U.S. government’s Silk Road seizure, newfound popularity of Bitcoin ordinals, and all things OpenAI CEO Sam Altman’s crypto project, WorldCoin!
In Episode 15 we explore AI regulation, new job-stealing capabilities for AI, and the dangers of autonomous drones, while pondering how to advise students on future careers. We discuss OpenAI's call for regulation amid fears of losing their monopoly, the rise of AI in gaming creating interactive stories, and their anxiety over autonomous killer drones. We also cover the latest news including ChatGPT for iOS and the wide release of Plugins for ChatGPT PRO users.
Please consider leaving a review where you get your podcasts to help spread the word.
CHAPTERS
===
00:00 - Cold open
00:25 - Sam Altman Tries to Regulate AI & AI Regulation Senate Hearing
03:29 - OpenAI's Lack of Moat to Open Source LLMs & Regulation
13:32 - Spear Phishing Attacks with AI LLMs & Bad Actors Using AI
22:38 - ChatGPT for iOS now available in USA
26:40 - The Future of AI: AI will be Everywhere
29:30 - Which Jobs will be Displaced First and How?
34:47 - As AI Chain of Thought Reasoning Improves Will More Jobs Go?
40:32 - Connecting AI to Our World as a New Interface
44:37 - Excitement on the Future of AI Gaming
48:51 - Will Startup Teams Be Smaller Due to AI & AI Regulation
52:10 - Building Things with AI: Custom Software, Movies, Games
55:35 - Palmer Luckey Interview: Autonomous Decision-Making AI Drones
SOURCES:
===
https://www.theguardian.com/technology/2023/may/16/ceo-openai-chatgpt-ai-tech-regulations
https://apps.apple.com/app/openai-chatgpt/id6448311069
https://www.searchenginejournal.com/chatgpt-plugins-web-browsing-beta/486785/#close
https://arxiv.org/ftp/arxiv/papers/2305/2305.06972.pdf
https://twitter.com/CNBCOvertime/status/1657120760513679375
The man behind ChatGPT has warned the US Senate that there is an “urgent” need for AI regulation. What might that look like?
Listen to the Truth Tellers podcast: https://podfollow.com/truth-tellers
Tortoise is a newsroom devoted to slow journalism.
For early access and ad-free listening subscribe to Tortoise+ on Apple Podcasts or join Tortoise for £60 a year.
As a member you’ll also get our newsletters and tickets to live events. Just go to tortoisemedia.com/slowdown.
If you’d like to further support slow journalism and help us build a different kind of newsroom, do consider donating to Tortoise at tortoisemedia.com/support-us. Your contributions allow us to investigate, campaign and explore, and to build a newsroom that is responsible and sustainable.
Hosted on Acast. See acast.com/privacy for more information.
In a Congressional hearing, Altman, whose company created ChatGPT, and senators expressed their fears about how AI could “go quite wrong.”
We give Bard a test run. We discuss how it compares to ChatGPT.
We pass our eyes over some of the more interesting stories breaking in the world of AI, including whether we'll see Tom Hanks in adult movies and how Sam Altman got on at the US Congress. The Pet Shop Boys got a mention, and we looked at some of the intellectual property issues that are cropping up as AI is embraced by musicians and filmmakers.
Remember to check out www.artificiallyspeaking.org for more information on what we covered and to get access to lots more content.
Got any questions or comments, or just want to find out more about us? You can drop us an email at hello@artificiallyspeaking.org or click below to find us on Twitter.
Open AI’s Sam Altman reassures Congress by saying “If this technology goes wrong, it can go quite wrong.” CISOs feel that their companies are “likely to be attacked.” And a database named Cockroach aims to solve the problems of moving between cloud providers.
We discussed Sam Altman's proposed solutions for regulating AI's impact on elections, as well as the exciting partnership between Anthropic and Zoom to build customer-centric AI products. We also highlighted the LLM University by Cohere, a comprehensive learning resource for anyone interested in NLP using language models. Finally, we delved into three research papers, including Optimizing Memory Mapping Using Deep Reinforcement Learning, Professional Certification Benchmark Dataset, and Symbol Tuning for ICL in Large Language Models.
Contact: sergi@earkind.com
Timestamps:
00:34 Introduction
01:46 Sam Altman is concerned about AI being used to compromise elections
03:03 Anthropic Announces Partnership and Investment from Zoom
05:43 The LLM University by Cohere — Your Go-To Learning Resource for NLP🎓
06:59 A repo to build a small junior AI developer
08:43 Fake sponsor: AccenTrick Energy Drink
10:57 Optimizing Memory Mapping Using Deep Reinforcement Learning
12:37 Professional Certification Benchmark Dataset: The First 500 Jobs For Large Language Models
14:47 Symbol tuning improves in-context learning in language models
17:10 Outro
Download Selfpause the AI Life Coach: https://Selfpause.com/AIBox
Get our Daily AI Newsletter: https://AIBox.ai
Join our ChatGPT Community: https://www.facebook.com/groups/739308654562189/
Follow me on Twitter: https://twitter.com/jaeden_ai
Whether you're an industry insider or simply curious about the power of AI, the AIBox newsletter has you covered. Each day, our team of expert researchers and writers curate the most important stories, ideas, and perspectives from the world of AI and ChatGPT.
Rachel joins Jason to discuss Caryn AI and its rise to popularity before explaining how GenZ uses AI in their everyday lives (1:11). They wrap up conversing about the hearings at the Senate regarding AI regulation and more (49:56).
(0:00) Rachel joins Jason
(1:11) Rachel breaks down CarynAI
(10:33) LinkedIn Jobs - Post your first job for free at https://linkedin.com/twist
(11:59) The ethics of Forever Voices and creating digital AI clones
(21:25) Notion - Apply for Notion for Startups at https://notion.com/jason
(22:40) How Gen Z is using AI, AI-powered filters, and Milli Vanilli
(32:01) Hampton - Join Hampton's community of high-growth founders today at https://joinhampton.com/twist
(33:20) Jason gives a history lesson, Rupert Murdoch's infamous trial and Snapchat's AI chatbot
(49:56) AI Senate hearings
FOLLOW Rachel: https://twitter.com/_rachelbraun
FOLLOW Jason: https://linktr.ee/calacanis
Subscribe to our YouTube to watch all full episodes:
https://www.youtube.com/channel/UCkkhmBWfS7pILYIk0izkc3A?sub_confirmation=1
FOUNDERS! Subscribe to the Founder University podcast:
https://podcasts.apple.com/au/podcast/founder-university/id1648407190
Topic Summary:
• Lex Fridman
• Wolfram Alpha
• ChatGPT
• Today in tech history
• AutoGPT
• Generative AI vs AI
• Google and Twitter purging inactive accounts
• Sam Altman testifies in Congress
Sam Altman Crypto Worldcoin Project Eyes $100 Million Goal S5 Ep170
patreon.com/roadtoempire
Kraken Invite Link: https://kraken.app.link/eG5vQaYUlyb
Learn Copywriting Basics: https://bit.ly/3xgH8DG
LinkedIn Pocket Guide: 6 Figures In 6 Weeks eBook: https://bit.ly/3asTO1y
--- Support this podcast: https://podcasters.spotify.com/pod/show/roadtoempire/support
FOR IMMEDIATE RELEASE
Altman engages with Atlanta’s Black business leaders, public officials, and HBCU students.
ATLANTA (May 15, 2023) – Operation HOPE recently partnered with Clark Atlanta University (CAU) to host two events focused on "The Future of Artificial Intelligence" with Sam Altman, OpenAI founder and ChatGPT creator. The conversations were led by Operation HOPE Founder, Chairman, and CEO John Hope Bryant and featured the President of Clark Atlanta University, Dr. George T. French, Jr.
Held on CAU’s campus, the first event provided Atlanta’s most prominent Black leaders from the public and private sectors an opportunity to engage with Altman and discuss pressing issues around artificial intelligence (AI). The second discussion gave local HBCU and Atlanta-based college students the same opportunity.
Altman, a billionaire tech pioneer, shared how he believes AI can positively impact lives and create new economic opportunities for communities of color, particularly among students at Historically Black Colleges and Universities (HBCUs). The standing-room-only event included representatives from government, technology, non-profit, education, and the creative industries.
In 2015, Altman co-founded OpenAI, a nonprofit artificial intelligence research and deployment company with the stated mission “to ensure that artificial general intelligence – highly autonomous systems that outperform humans at most economically valuable work — benefits all of humanity.” In partnership with Operation HOPE, serial entrepreneur Altman has committed to making AI a force for good by stimulating economic growth, increasing productivity at lower costs, and stimulating job creation.
“The promise of an economic boost via machine learning is understandably seductive, but if we want to ensure AI technology has a positive impact, we must all be engaged early on. With proper policy oversight, I believe it can transform the future of the underserved,” said Operation HOPE Chairman, Founder, and CEO John Hope Bryant. “The purpose of this discussion is to discover new ways to leverage AI to win in key areas of economic opportunity such as education, housing, employment, and credit. If it can revolutionize business, it can do the same for our communities.”
“Getting this right by figuring out the new society that we want to build and how we want to integrate AI technology is one of the most important questions of our time,” Altman said. “I’m excited to have this discussion with a diverse group of people so that we can build something that humanity as a whole wants and needs.”
Throughout the event, Altman and Bryant demystified AI and how modern digital technology is revolutionizing the way today’s businesses compete and operate. By putting AI and data at the center of their capabilities, companies are redefining how they create, capture, and share value—and are achieving impressive growth as a result. During the Q&A session, they also discussed how government agencies can address AI policies that will lead to more equitable outcomes.
Altman is an American entrepreneur, angel investor, co-founder of Hydrazine Capital, former president of Y Combinator, founder and former CEO of Loopt, and co-founder and CEO of OpenAI. He was also one of TIME Magazine’s 100 Most Influential People of 2023.
According to recent research by IBM, more than one in three businesses were using AI technology in 2022. The report also notes that the adoption rate is exponential, with 42% currently considering incorporating AI into their business processes. Other research suggests that although the public sector is lagging, an increasing number of government agencies are considering or starting to use AI to improve operational efficiencies and decision-making. (McKinsey, 2020)
About Operation HOPE, Inc.
Since 1992, Operation HOPE has been moving America from civil rights to “silver rights” with the mission of making free enterprise and capitalism work for everyone—disrupting poverty for millions of low and moderate-income youth and adults across the nation. In 2023, Operation HOPE was named to Fast Company’s World Changing Ideas Award for pursuing innovation for good. Through its community uplift model, HOPE Inside, which received the Innovator of the Year recognition by American Banker magazine, Operation HOPE has served more than 4 million individuals and directed more than $3.2 billion in economic activity into disenfranchised communities—turning check-cashing customers into banking customers, renters into homeowners, small business dreamers into small business owners, minimum wage workers into living wage consumers, and uncertain disaster victims into financially empowered disaster survivors. Operation HOPE recently received its eighth consecutive 4-star charity rating for fiscal management and commitment to transparency and accountability from the prestigious non-profit evaluator, Charity Navigator. For more information: OperationHOPE.org. Join the conversation on social media at @operationHOPE.
###
Media Contact:
Lalohni Campbell, 404-593-7145, la@persemediagroup.com
The post Operation HOPE and Clark Atlanta University Host ChatGPT Creator and OpenAI CEO Sam Altman to Discuss the Future of AI in the Black Community appeared first on John Hope Bryant.
Join my WhatsApp group (N5,000/month): https://app.groupify.co/g/yb8Tqi7iiavV
Join my 3-day 'I CAN HELP YOU SELL REAL ESTATE': https://paystack.com/buy/i-can-make-you-sell-real-estate-pwrbdu
Get Bumpa with my code 'PaulFoh' https://linktr.ee/getbumpam
Today's topics include:
- Infarm downsizes and exits Europe
- Sam Altman: 100 million US dollars for Worldcoin
- Augustus Intelligence: multi-million claim against Amthor
- German investor barometer rises
- Possible conflict of interest for the startup state secretary
- Twitter CEO Linda Yaccarino is looking forward to the job
- Protective shield proceedings to rescue Sono Motors
- Samsung develops AI chatbot
- Vice Media is insolvent
- 3.5 million euros for Nia Health
📰 📢 Want to read the latest news from the startup world, not just hear it? Then check out our daily newsletter here.
MONDAY KETCHUP: Yaccarino's first tweet, Peloton's CEO raise, Dan Snyder sells, Sam Altman now wants your eyeballs, and NERD ALERT: the effect of high performers bailing
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Helping your Senator Prepare for the Upcoming Sam Altman Hearing, published by Tiago de Vassal on May 14, 2023 on LessWrong.
Hello Everyone,
Next Tuesday, the US Senate will hold a hearing on AI (). OpenAI CEO Sam Altman will testify before Congress for the first time.
We believe this is a great opportunity to bring awareness to some of the risks posed by AI technologies, and build common knowledge among US policymakers on possible solutions.
If you live in the United States, you can play an important role as a constituent. Please reach out to your senator who will be present at this hearing, and voice your concerns about AI safety. To help them navigate the debate, we propose a list of questions, initially drafted by Siméon to share with the senators present at the hearing ().
We’ve made a webpage where you can find out if a senator from your state will be present at the hearing. This page also includes the list of questions you can send them, a structured script for both call and email, as well as the necessary contact information./#senate
AI Safety Tour is an initiative dedicated to mitigating AI existential risks through the active engagement of experts and related public figures in the public debate. You can learn more about us here: aisafetytour.com. If you click the join button, you’ll get to a form where you can reach out to us. You can also email us at hello@aisafetytour.com. And of course we’ll read every single comment on this page.
Thanks,
Tiago & Siméon
AI Safety Tour
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Dan and Erik discuss how Elon got coded right-wing, why liberals are more depressed than conservatives, and crime and prisons.
LINKS:
Patrick Collison interviews Sam Altman: https://www.youtube.com/watch?v=1egAKCKPKCk
RECOMMENDED PODCAST:
Upstream with Erik Torenberg: https://link.chtbl.com/Upstream
TIMESTAMPS:
(00:00) Episode preview
(01:48) Twitter’s new CEO and Elon
(8:00) When did Elon become right-wing coded?
(14:00) Contrasting Zuck and Elon
(15:28) Sponsors: Secureframe | MarketerHire
(17:26) Contrasting Elon and Sam Altman
(22:00) Elon’s quest to increase “net truth”
(28:00) AI race / live players
(30:45) Accountability within private vs public sector
(33:00) Facebook bull case
(39:30) Jonathan Haidt’s anti-facebook case
(44:00) Why depression is on the rise
(50:00) Why liberals are more depressed than conservatives
(58:00) Living in America is higher variance
(1:05:00) Different approaches to curbing crime
TWITTER:
@MOZ_Podcast
@eriktorenberg (Erik)
@dwr (Dan)
More shownotes released in our Substack: https://momentofzen.substack.com/
Please support our sponsors: Secureframe | MarketerHire
- Secureframe: https://secureframe.com/
Secureframe is the leading all-in-one platform for security and privacy compliance. Get SOC-2 audit ready in weeks, not months. I believe in Secureframe so much that I invested in it, and I recommend it to all my portfolio companies. Sign up for a free demo and mention MOZ during your demo to get 20% off your first year. Secureframe has just released Secureframe Trust, a new product that lets you showcase your organization's security posture to build customer trust.
- MarketerHire: https://marketerhire.com/moz
MarketerHire is one of my favorite resources for growing startups looking to hire marketers. They have thousands of pre-vetted marketers across a dozen roles, whether you need help with growth, marketing, SEO, lifecycle, content, or any other aspect of growth marketing strategy. Over 5,000 companies already use MarketerHire to hire expert marketers on demand, ranging from top venture-backed startups to the most well-known Fortune 500s.
Go to marketerhire.com/moz and use code MOZ to get your $1,000 credit for your first hire.
Darntons | Investing In Innovation
Follow us for all the latest content on innovation within public markets.
Darntons Exclusive (Patreon): https://www.patreon.com/Darntons
FREE 2023 Investing Outlook: Sign-Up | Darntons Media
NEWSLETTER: Darntons | Newsletter | Focused On Innovation
WEBSITE: https://darntons.com/
Main topics
You get rich by owning things
Build a network
Be internally driven
Article: https://blog.samaltman.com/how-to-be-successful
Where to find me
Book a chat with me: https://calendly.com/xiaoshuaifm/15min
Newsletter with my real-estate notes:
https://littleshuai.substack.com/p/25-
I also chat about real estate on Xiaohongshu as 【Little Shuai-佐治亚小帅】:
https://www.xiaohongshu.com/user/profile/5e12b11d000000000100962a
Twitter
https://twitter.com/xiaoshuaifm
About this channel
Buy - Buy asset
Borrow - Use other people's money
Die - Hold long
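The "Buy, Borrow, Die" framing above can be sketched with hypothetical numbers. This is purely illustrative, not tax or investment advice; the capital-gains rate and loan-to-value ratio are assumptions:

```python
# Hypothetical "Buy, Borrow, Die" sketch: compare selling an appreciated
# asset (realizing capital-gains tax) with borrowing against it and holding.
# All numbers are assumptions for illustration only.

asset_basis = 100_000      # what you paid for the asset
asset_value = 500_000      # what it's worth now
cap_gains_rate = 0.20      # assumed long-term capital-gains tax rate
loan_ltv = 0.50            # assumed loan-to-value ratio a lender allows

# Sell: the gain is realized, so tax is owed on it.
gain = asset_value - asset_basis
cash_from_sale = asset_value - gain * cap_gains_rate

# Borrow: no sale means no realized gain; you raise cash via a loan while
# keeping the (still appreciating) asset. Heirs may later receive a
# stepped-up basis at death -- the "Die" part of the strategy.
cash_from_loan = asset_value * loan_ltv

print(cash_from_sale)  # 420000.0 after tax, but the asset is gone
print(cash_from_loan)  # 250000.0 in hand, asset still held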
This channel is a podcast focused on building wealth, with a particular focus on real-estate investing, documenting a beginner's investment journey.
Important reminder
All discussions on this channel are based on the US context and do not apply to other regions.
Investing carries risk.
This show is a record of personal thinking, not investment or financial advice.
Please do your own research and make your own judgments and decisions.
For any feedback on the show, feel free to email me:
Hosted on Acast. See acast.com/privacy for more information.
Main topics
Be hard to compete with
Build a network
Article: https://blog.samaltman.com/how-to-be-successful
Main topics
Be bold & Be willful
Article: https://blog.samaltman.com/how-to-be-successful
Main topics
Focus & Work hard
Article: https://blog.samaltman.com/how-to-be-successful
But his creation OpenAI lit the spark of today's AI explosion. And while some see it as a liberation, others see the AI arms race as a threat to all of humanity.
Host: Evelyn Jones. With Linus Larsson, DN's tech editor.
Producer: Marcus Morey-Halldin.
Main topics
Make it easy to take risks
Article: https://blog.samaltman.com/how-to-be-successful
Main topics
Get good at "sales"
Article: https://blog.samaltman.com/how-to-be-successful
This podcast is a commentary and does not contain any copyrighted material of the reference source.
We strongly recommend accessing/buying the reference source at the same time.
■Reference Source
■Post on this topic (You can get FREE learning materials!)
■YouTube Video
- https://youtu.be/OCZCHUvfAls (All Words)
- https://youtu.be/757rp9Ve__0 (Advanced Words)
- https://youtu.be/N7CPwINaVWQ (Quick Look)
■Top Page for Further Materials
■SNS (Please follow!)
🐦 Twitter ➡︎ https://twitter.com/EnglistMe
📷 Instagram ➡︎ https://instagram.com/englist.me/
🎵 TikTok ➡︎ https://www.tiktok.com/@englist.me
🅿️ Pinterest ➡︎ https://tiktok.com/@englist.me
--------------
About Us
--------------
We are strongly committed to building your vocabulary by publishing comprehensive learning materials.
Let's get a wide variety of learning materials from the following site!
You will gain a deeper understanding of vocabulary through Spelling and Fill in the Blank exercises.
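As an illustration of the kind of exercise described above, here is a hypothetical sketch of how a fill-in-the-blank item can be generated from a sentence and a target word. This is not the site's actual implementation:

```python
# Hypothetical fill-in-the-blank generator; not the actual englist.me code.

def make_blank(sentence: str, target: str) -> str:
    """Replace the target word with underscores, keeping its length as a hint."""
    blank = "_" * len(target)
    return sentence.replace(target, blank)

item = make_blank("She gave an eloquent speech.", "eloquent")
print(item)  # She gave an ________ speech.
```

The learner fills in the missing word, practicing both recall and spelling at once.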
#TOEFL, #IELTS, #CambridgeEnglish, #GRE, #SAT, #GMAT, #academic, #ted, #tedtalk, #teded, #english, #vocabulary, #wordlist, #englishbook, #study, #englishstudy, #vocabularywords
In this episode, I recap some of my top takeaways from listening to Sam Altman talk about AI, ChatGPT, and the future of technology at the Stripe Sessions event in San Francisco.
For more on AI:
- Read my guide: AI for Hotels: A Guide to Artificial Intelligence for Hospitality Leaders
- Watch me and Chris Cano discuss the implications for hospitality providers
Join the conversation on today's episode on the Hospitality Daily LinkedIn page.
Hospitality Daily is brought to you by MDO and their Night Auditor Appreciation Day on May 18th. Learn more and get a free Starbucks gift card to give to the person on your team doing this important work at HotelHeros.org
Hospitality Daily isn't just a podcast! Every morning - Tuesday through Friday - I summarize the stories you need to know as a hospitality professional in a short email. Read today's issue and subscribe here.
Run a hotel? Then check out my website HotelOperations.com. Every Sunday evening, I send out an email with the latest stories and case studies for operators. Subscribe at hoteloperations.com/newsletter
Artificial intelligence (AI) is dominating the headlines, but it’s not a new topic here on Exponential View. This week and next, Azeem Azhar shares his favorite conversations with AI pioneers. Their work and insights are more relevant than ever.
OpenAI has stunned the world with the release of its language-generating AI GPT-4. In 2020, OpenAI CEO Sam Altman joined Azeem Azhar to reflect on the huge attention generated by GPT-4's precursor and what that could mean for future research and development toward the creation of artificial general intelligence (AGI).
They also explored:
- How AGI could be used both to reduce and exacerbate inequality
- How governance models need to change to address the growing power of technology companies
- How Altman’s experience leading Y Combinator informed his leadership of OpenAI
Further reading:
- “The messy, secretive reality behind OpenAI’s bid to save the world“ (Wired, 2020)
- “Sam Altman’s Manifest Destiny“ (New Yorker, 2016)
- “Governance in the Age of AI“ (Exponential View Podcast, 2019)
Main topics
Learn to think independently
Article: https://blog.samaltman.com/how-to-be-successful
How might we develop and deploy beneficial, safe artificial general intelligence for humanity? Reid and Aria are joined by Sam Altman, the CEO of OpenAI, and Greg Brockman, OpenAI co-founder and president. Sam and Greg trace their journey—from articulating their mission to early company projects and decisions to scaling and sharing GPT-4 with the world. They also explore the transformative impact artificial intelligence can have on other industries, like energy, medicine, education, and law. Plus, GPT-4 offers a poetic perspective on a piece of code.
Read the transcript of this episode here.
Read OpenAI’s paper on the Unsupervised Sentiment Neuron here.
Here's the code used to generate Greg's AI poem: https://github.com/openai/openai-python/blob/main/openai/api_requestor.py.
Read Impromptu by Reid Hoffman with GPT-4 here.
For more info on the podcast and transcripts of all of the episodes, visit www.possible.fm/podcast.
Topics
4:00 - Hellos and intros
4:30 - The OpenAI mission
8:45 - Advancements in education and medicine
12:14 - Surprises with scale
15:19 - Building GPT-4
18:34 - Regulating AI
25:50 - How OpenAI got where it is today
28:26 - First scaling success with DOTA
32:51 - Which industries will AI transform?
39:30 - Sam and Greg’s investments outside AI
45:08 - Surprising applications of AI
49:40 - Rapidfire questions
56:16 - Debrief with Reid and Aria
Possible is a new podcast that sketches out the brightest version of the future—and what it will take to get there. Most of all, it asks: what if, in the future, everything breaks humanity's way? Hosted by Reid Hoffman and Aria Finger, each episode features an interview with a visionary from a different field: climate science, media, criminal justice, and more. The conversation also features another kind of guest: GPT-4, OpenAI’s latest and most powerful language model to date. Each episode has a companion story, generated by GPT-4, which will serve as a jumping-off point for a hopeful, speculative discussion about what humanity could possibly get right if we leverage technology—and our collective effort—effectively.
Possible is produced by Wonder Media Network and hosted by Reid Hoffman and Aria Finger. Our showrunner is Shaun Young. Possible is produced by Edie Allard and Sara Schleede. Jenny Kaplan is our Executive Producer and Editor. Special thanks to Theresa Lopez, Surya Yalamanchili, Saida Sapieva, Ian Alas, Greg Beato, and Ben Relles.
Main topics
Have almost too much self-belief
Article: https://blog.samaltman.com/how-to-be-successful
Main topics
Compound yourself
Article: https://blog.samaltman.com/how-to-be-successful
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] Sam Altman's 2015 Blog Posts Machine Intelligence Parts 1 & 2, published by Olivia Jimenez on April 28, 2023 on LessWrong.
I'm often surprised more people haven't read OpenAI CEO Sam Altman's 2015 blog posts Machine Intelligence Part 1 & Part 2. In my opinion, they contain some of the strongest, most direct, and clearest articulations of why AGI is dangerous from a person at an AGI company.
(Note that the posts were published before OpenAI was founded. There's a helpful wiki of OpenAI history here.)
Hence: a linkpost. I've copied both posts directly below for convenience. I've also bolded a few of the lines I found especially noteworthy.
Machine intelligence, part 1
This is going to be a two-part post—one on why machine intelligence is something we should be afraid of, and one on what we should do about it. If you’re already afraid of machine intelligence, you can skip this one and read the second post tomorrow—I was planning to only write part 2, but when I asked a few people to read drafts it became clear I needed part 1.
WHY YOU SHOULD FEAR MACHINE INTELLIGENCE
Development of superhuman machine intelligence (SMI) [1] is probably the greatest threat to the continued existence of humanity. There are other threats that I think are more certain to happen (for example, an engineered virus with a long incubation period and a high mortality rate) but are unlikely to destroy every human in the universe in the way that SMI could. Also, most of these other big threats are already widely feared.
It is extremely hard to put a timeframe on when this will happen (more on this later), and it certainly feels to most people working in the field that it’s still many, many years away. But it’s also extremely hard to believe that it isn’t very likely that it will happen at some point.
SMI does not have to be the inherently evil sci-fi version to kill us all. A more probable scenario is that it simply doesn’t care about us much either way, but in an effort to accomplish some other goal (most goals, if you think about them long enough, could make use of resources currently being used by humans) wipes us out. Certain goals, like self-preservation, could clearly benefit from no humans. We wash our hands not because we actively wish ill towards the bacteria and viruses on them, but because we don’t want them to get in the way of our plans.
(Incidentally, Nick Bostrom’s excellent book “Superintelligence” is the best thing I’ve seen on this topic. It is well worth a read.)
Most machine intelligence development involves a “fitness function”—something the program tries to optimize. At some point, someone will probably try to give a program the fitness function of “survive and reproduce”. Even if not, it will likely be a useful subgoal of many other fitness functions. It worked well for biological life. Unfortunately for us, one thing I learned when I was a student in the Stanford AI lab is that programs often achieve their fitness function in unpredicted ways.
Evolution will continue forward, and if humans are no longer the most-fit species, we may go away. In some sense, this is the system working as designed. But as a human programmed to survive and reproduce, I feel we should fight it.
How can we survive the development of SMI? It may not be possible. One of my top 4 favorite explanations for the Fermi paradox is that biological intelligence always eventually creates machine intelligence, which wipes out biological life and then for some reason decides to make itself undetectable.
It’s very hard to know how close we are to machine intelligence surpassing human intelligence. Progression of machine intelligence is a double exponential function; human-written programs and computing power are getting better at an exponential rate, and self-learning/self-improving software will improve i...
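Altman's point that "programs often achieve their fitness function in unpredicted ways" can be illustrated with a toy example. This sketch is mine, not from the post: an optimizer told to maximize accuracy on an imbalanced dataset wins via a degenerate shortcut rather than by learning anything about the inputs.

```python
import random

# Toy illustration (not from the post): given the fitness function
# "maximize accuracy" on a dataset where 95% of labels are 0, the
# best-scoring candidate is the one that ignores its input entirely
# and always predicts the majority class -- an unintended solution.

def fitness(predict, data):
    """Fraction of examples a candidate program gets right."""
    return sum(predict(x) == y for x, y in data) / len(data)

random.seed(0)
data = [(random.random(), 0) for _ in range(95)] + \
       [(random.random(), 1) for _ in range(5)]

candidates = {
    "always_zero": lambda x: 0,           # the degenerate shortcut
    "random_guess": lambda x: random.randint(0, 1),
    "threshold": lambda x: int(x > 0.5),  # an honest but uninformed rule
}

scores = {name: fitness(f, data) for name, f in candidates.items()}
best = max(scores, key=scores.get)
print(best, scores[best])  # the shortcut wins with 0.95 fitness
```

The "program" that optimizes the stated objective best is the one that subverts its intent, which is the essence of the unpredicted-optimization worry.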
This episode brings some very interesting internal and external news. On the internal side, we have launched the YouTube channel and the website. On YouTube you can find every episode of the podcast, and we'll see whether I can add more things down the line... The website also has all the episodes, and there you can get to know me a little better. As for external news, there are several open fronts: the approach of Google I/O, Google's annual developer conference, and in June Apple's WWDC, two events that generate rumors, announcements, and news as they draw near... There is also news from other brands worth mentioning. We close with a mini-biography of Sam Altman, the current CEO of OpenAI.
For any questions or queries, you can write to me at hola@raulenred.com or via Mastodon.
Music by Coma-Media, Gvidon, and Marat Mukhamadiev from Pixabay.
--- Send in a voice message: https://podcasters.spotify.com/pod/show/raulmacias/message
Sam Altman, the co-founder of OpenAI, expects his company to capture as much as $100 trillion of the world's wealth.
He proposes a plan that will help redistribute that wealth and make sure that everyone is able to participate.
Now, it's important to understand that five years ago, many AI researchers mocked this man for his ideas about AGI and how fast progress would accelerate.
No one is mocking him now.
In fact, many people are asking him to pause AI research for six months so we can put safety measures in place.
So keep this in mind as you hear his predictions: this person tends to see where the AI puck is going much better than most.
In Sam Altman’s blog he argues that AI advancement will replace the need for human labor.
As the cost of human labor falls towards zero, we need to set up a policy that will allow us to distribute resources and improve the standard of living for everyone.
At the same time, Goldman Sachs released a report on the massive potential impact AI will have on the global economy, including what percentage of the workforce is expected to be replaced by AI automation.
A very recent study from MIT shows some surprising findings as white-collar workers are asked to use tools like ChatGPT to help with their work.
Using AI tools seems to improve human productivity, but also to reduce productivity inequality between workers.
That is, people using these AI tools tend to produce less substandard work, make fewer errors, and also tend to "give up" less on their assigned tasks.
In this video we look at the key studies that indicate how powerful these AI tools will be.
Welcome to a new episode, a new INSIDE.X, where we talk about:
🟥 THE ROAD TO AGI. An analysis of Lex Fridman's interview with Sam Altman, CEO of OpenAI
We analyze Lex Fridman's interview with Sam Altman, along with all the latest news around ChatGPT, OpenAI, and the path toward the development, near or not, of an AGI.
A thrilling, epic program.
Presented and directed by:
Plácido Doménech Espí | Creator of xHUB.AI | CEO MXND
🍿Enjoy!
#inteligenciaartificial #agi #openai #samaltman #chatgpt
This week the three hosts discuss:
- The sudden slump in Taiwan's book market between 2012 and 2015
- What exactly counts as a "mainland Chinese term"?
- Terry Gou (郭台銘) inviting Sam Altman to visit Taiwan
Enjoying our podcast Side Chat (塞掐)? Whatever your thoughts, write or message us! You can also leave us comments on INSIDE's Facebook, Instagram, or YouTube.
See omnystudio.com/listener for privacy information.
Disclaimer: This video depicts a fictional podcast between Joe Rogan and Sam Altman, with all content generated using AI language models. The ideas and opinions expressed in the podcast are not reflective of the thoughts of Joe Rogan or Sam Altman. The content portrayed in this video is purely for entertainment purposes and should not be taken as a representation of the actual beliefs or attitudes of the individuals portrayed. The use of AI technology to generate this content is solely intended as an exploration of the capabilities of language models and should not be misconstrued as a genuine conversation between the individuals depicted. Any resemblance to actual events, individuals, or entities is purely coincidental. Viewers are encouraged to approach this content with a critical and discerning eye and to understand that the views expressed in this video are not intended to reflect those of the individuals portrayed or of any affiliated organizations or entities.
#JoeRoganAI #SamAltmanAI #AIgenerated #FictionalPodcast #LanguageModels #ArtificialIntelligence #AItechnology #DeepLearning #MachineLearning #GenerativeModels #ElonMusk #NaturalLanguageProcessing #VirtualConversation #ConversationalAgents #AIChatbot #VoiceTechnology #SyntheticSpeech #FuturisticEntertainment #SciFiPodcast #SpeculativeFiction #ExperimentalContent #FakePodcast #OpenAI #DallE #MidJourney #ChatGPT #GPT
In Episode 7 of Life After AI, Alec and Asher discuss OpenAI's release of ChatGPT and the implications, the open letter to halt all AI development for 6 months, Sam Altman's recent interview with Lex Fridman, AI alignment scenarios, and more.
Chapters:
Intro: 0:00 - 0:43
ChatGPT Plugins: 0:46 - 18:34
Open Letter to Halt AI: 18:40 - 25:55
Elon Musk & Sam Altman/OpenAI: 25:58 - 30:49
OpenAI Switching to For-Profit: 30:49
Follow Us on Twitter: https://twitter.com/LifeAfterAI
Subscribe on YouTube: https://youtube.com/@lifeafterai
In this episode of Partnering Leadership, Mahan Tavakoli reflects on the accelerating speed of change as artificial intelligence hits the inflection point of its exponential curve. To discuss the latest in AI and how these developments will impact organizations, Mahan speaks with Tom Taulli, author of Artificial Intelligence Basics, who is also an investor and advisor to AI companies. First, Mahan and Tom discuss the significance of the release of GPT-4, its uses, and the importance of GPT-4 plugins. They also talk about Microsoft Bing, Microsoft's Copilot, Google's Bard, and the impact of platforms incorporating AI technology into their office tools. Next, Tom and Mahan talk about how these tools will impact knowledge work and some of the concerns regarding the potential disruptive impact of AI. Then the discussion turns to how organizations and teams can approach experimentation with AI tools to take advantage of the opportunities and stay ahead of the competition. Finally, Tom Taulli shares what leaders must consider when implementing AI technology in their organizations.
Some Highlights:
- How the rapid advancement in artificial intelligence is transforming the world
- The difference between GPT-3.5 and GPT-4 and the potential uses for GPT-4
- Microsoft Copilot and examples of how to use generative AI in teams and organizations
- Tom Taulli on using AI-based technologies to automate repetitive tasks and the use of chatbots in organizations
- Potential applications of AI in knowledge management
- OpenAI CEO Sam Altman's concerns about AI and the need for conversation around AI's future applications
- What generative AI doing well in various tests means for various professions
- Generative AI's impact on jobs, including coding
- AI applications in the workplace
Additional Partnering Leadership episodes on Artificial Intelligence:
Tom Taulli: AI Bootcamp for Leaders
Dan Turchin: AI & The Future of Work
Mahan Tavakoli: AI & The Augmented Future of Work
Louis Rosenberg: AI, Augmented and Virtual Reality
Emily Yu: AI Technology to Support Social Changemakers
Connect with Tom Taulli:
Connect with Mahan Tavakoli:
We talk about OpenAI's approach to artificial intelligence and dig into ChatGPT.
Here is the link for the original show
https://open.spotify.com/episode/6rAOusZcsuNtCv8mefmwND?si=3b90c8ced26f4436
https://youtu.be/L_Guz73e6fw
Other interview with Sam mentioned in show
Be sure to email us and let us know what you think.
talkinheadspod@gmail.com
Our special guest is Pep Viladomat, a partner at McKinsey, who shares first-hand his experience at the world's most prestigious consulting firm. Pep also takes us through McKinsey's history and explains how it became the giant it is today.
We also review current events, commenting on Lex Fridman's interview with Sam Altman and debating TikTok's appearance before the US Congress.
All this and more on the Itnig Tertulia…
Follow us on Twitter:
• Bernat Farrero: @bernatfarrero
• Jordi Romero: @jordiromero
• César Migueláñez: @heycesr
EVENTS
Pitch to Investors (every Thursday, 7pm) - https://itnig.net/events/
Itnig Talks - https://youtube.com/playlist?list=PLs...
ABOUT ITNIG
Twitter - https://twitter.com/itnig
LinkedIn - https://es.linkedin.com/company/itnig
Instagram - https://www.instagram.com/itnig/
Newsletter - https://itnig.net/newsletter/
Web - https://itnig.net/
LISTEN TO OUR PODCAST ON
Spotify: http://bit.ly/itnigspotify
Apple Podcast: http://bit.ly/itnigapple
The rockstar of the moment, Sam Altman, CEO of OpenAI, sat down with Lex Fridman, and Santi Siri and Mauro Ordoñez analyze his most important statements. What is ChatGPT's strategy? What are their plans for the future? We also discuss the controversial arrest of Do Kwon, former CEO of Terra Luna, and its implications for the crypto world. In global geopolitics, Xi Jinping and Putin held a more than friendly meeting in which they celebrated "a change that no one has seen in 100 years." Is the dollar under threat?
Santi and Mauro also discuss the TikTok CEO's lukewarm appearance before the US Congress. Is China advancing toward the West?
We talk about all this and much more on La Última Frontera.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What can we learn from Lex Fridman’s interview with Sam Altman?, published by Karl von Wendt on March 27, 2023 on LessWrong.

These are my personal thoughts about this interview.

Epistemic status: I consider myself neither a machine-learning expert nor an alignment expert. My focus is on outreach: explaining AI safety to the general public and to professionals outside the AI safety community. An interview like this one is therefore important material for me, both for understanding the situation myself and for explaining it to others. After watching it, I’m somewhat confused. There were parts of this talk that I liked and others that disturbed me. There seems to be a mix of humility and hubris, of openly acknowledging AI risks and downplaying some elements of them. I am unsure how open and honest Sam Altman really was. I don’t mean to criticize; I want to understand what OpenAI’s and Sam Altman’s stance towards AI safety really is.

Below I list transcriptions of the parts that seemed most relevant to AI safety, along with my thoughts and questions about them. Maybe you can help me understand this better by commenting.

[23:55] Altman: “Our degree of alignment increases faster than our rate of capability progress, and I think that will become more and more important over time.”

I don’t really understand what this is supposed to mean. What is a “degree of alignment”? How can you meaningfully compare it with a “rate of capability progress”? To me, this sounds a lot like marketing: “We know we are dealing with dangerous stuff, so we are extra careful.” Then again, it’s probably hard to explain this in concrete terms in an interview.

[24:40] Altman: “I do not think we have yet discovered a way to align a super powerful system. We have something that works for our current scale: RLHF.”

I find this very open and honest. He not only knows about the alignment problem, but openly admits that RLHF is not the solution to aligning an AGI. Good!

[25:10] Altman: “It’s easy to talk about alignment and capability as orthogonal vectors, [but] they’re very close: better alignment techniques lead to better capabilities, and vice versa. There are cases that are different, important cases, but on the whole I think things that you could say, like RLHF or interpretability, that sound like alignment issues also help you make much more capable models, and the division is just much fuzzier than people think.”

This, I think, contains two messages: “capabilities research and alignment research are intertwined” and “criticizing us for advancing capabilities so much is misguided, because we need to do that in order to align AI”. I understand the first one, but I don’t subscribe to the second one; see the discussion below.

[47:53] Fridman: “Do you think it’s possible that LLMs really are the way we build AGI?”

Altman: “I think it’s part of the way. I think we need other super important things. … For me, a system that cannot significantly add to the sum total of scientific knowledge we have access to – kind of discover, invent, whatever you want to call it – new, fundamental science, is not a superintelligence. … To do that really well, I think we need to expand on the GPT paradigm in pretty important ways that we’re still missing ideas for. I don’t know what those ideas are. We’re trying to find them.”

This is pretty vague, which is understandable. However, it seems to indicate that the current, relatively safe, mostly myopic GPT approach will be augmented with elements that may make it much more dangerous, such as long-term memory and dynamic learning. This is highly speculative, of course.

[49:50] Altman: “The thing that I’m so excited about is not that it’s a system that kind of goes off and does its own thing, but that it’s this tool that humans are using in this feedback loop. … I’m excited about a world ...
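For readers unfamiliar with the RLHF technique Altman mentions at [24:40]: its core component is a reward model trained on human preference comparisons between pairs of model responses. A minimal, purely illustrative sketch of the standard Bradley-Terry preference loss used for such reward models (the function and score values here are hypothetical examples, not OpenAI's implementation):

```python
import math

def reward_model_loss(r_chosen: float, r_rejected: float) -> float:
    """Bradley-Terry preference loss: -log(sigmoid(r_chosen - r_rejected)).

    r_chosen is the reward model's score for the response humans preferred,
    r_rejected the score for the response they rejected. The loss is small
    when the model ranks the preferred response higher, large otherwise.
    """
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Correctly ranked pair (preferred response scored higher): small loss.
loss_good = reward_model_loss(2.0, -1.0)
# Incorrectly ranked pair (preferred response scored lower): large loss.
loss_bad = reward_model_loss(-1.0, 2.0)
```

Minimizing this loss over many human comparisons yields a scalar reward signal, which is then used to fine-tune the language model with reinforcement learning; the point of the post's commentary stands either way, since this machinery says nothing about aligning systems far beyond the current scale.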
Welcome to a new episode, a new INSIDE.X, where we discuss:
🟥 Why create an AGI? Analysis of the ABC News interview with Sam Altman, CEO of OpenAI.
Interview video:
https://youtu.be/540vzMlf-54
Presented and directed by:
Plácido Doménech Espí | Creator of xHUB.AI | CEO MXND
🍿 Enjoy!
#agi #inteligenciaartificial