I had an incredible time at #EmTechDigital, hosted by MIT Technology Review. Great talks by leaders across Google, AWS, OpenAI, Meta, and others. A few takeaways:

✨ Get ready for generative experiences! UX will start to change in a big way. Instead of predefined content tailored to fixed personas, generative experiences will let us interact with applications through questions, gestures, and natural voice, all informed by a rich, personal understanding of our preferences and needs.

⚙️ Generative AI will force us to confront traditional workforce productivity metrics. With 75% of organizations planning to revisit their talent strategy in the next two years, metrics like time spent, lines of code, and amount of activity won't cut it anymore; worse, they risk penalizing workers whose tasks will be impacted. Instead, outcome metrics, such as quality, error rate, satisfaction, and profit growth, should define how workers are compensated.

⚖️ If we regulate it for humans, we should regulate it for AI. But the mechanics still need to be worked out. Doctors need to be licensed, but what about getting medical advice from an LLM? Is it the same as looking up information on WebMD? Does the consumer need a license, or does the model need to be certified? And for what scope? These are some of the questions regulators will need to grapple with.

🚀 The future evolution of AI is still being written. How far we go (and where) will be determined by frontier research in multimodality, multi-model coordination, next-gen compute hardware, interpretability, and privacy tech.

Outstanding show as always by the MIT crew, plus a special shout-out to Amy Nordrum, who killed it on stage. Thanks to everyone who took some time to chat and exchange ideas! Can't wait to connect again soon.
Cal Al-Dhubaib’s Post
UK friends, excited to share that I'll be back in London to speak at Open Data Science Conference (ODSC) Europe, September 5-6! Join me for "Bringing AI Strategy to Life" where I will share some best practices from working in heavily regulated environments like healthcare, energy, and finance. You will learn how to bridge the gap between technical teams and business stakeholders to develop AI solutions that are technically sound and widely trusted. Let me know if you'd like to catch up while I'm in town!
I first became aware of Dr. Joy's work in 2017, during the early days of building Pandata. At the time, I was recovering from my first failed venture, where we applied machine learning models to electronic medical records. Three key challenges emerged from that experience:

1. Data science talent was scarce outside tech hubs.
2. Businesses lacked broader awareness of the unintended consequences of machine learning.
3. There was a growing need for strong communication skills to bridge the gap between data science and business and build safer, fairer machine learning solutions.

And so I turned to folks like Dr. Joy, who excelled at making complex and emotionally difficult topics digestible. Her insights were invaluable as I navigated the intersection of AI ethics and practical application. After years of following her incredible journey, I was honored to meet one of my personal heroes at DataConnect Conference. If you haven't yet, add "Unmasking AI" to your summer reading list. It's a powerful story about how algorithms can go wrong and Dr. Joy's heroic efforts to drive change in the tech industry. And if you're looking for ways to help effect positive change? Consider supporting the Algorithmic Justice League.
The last few days of DataConnect Conference have been nothing short of incredible! I'm humbled and honored to be part of the Women in Analytics (WIA) board that has helped put over 450 women leaders in data and AI on this stage. From deeply technical talks about bringing AI agents to life to practical discussions about the future of work to addressing global peace and biodiversity. We even got to meet some friendly pups. I'm especially grateful to call some of these incredible women my friends and to have so many of my Further colleagues along for the journey. As a special treat, my parents (in from Saudi Arabia) surprised me in Columbus for my birthday ❤️ Thank you to everyone who took the time to say hello! Whether we nerded out about data or life in general, it made my day. Can't wait to see you again next year!
I couldn't imagine a better way to celebrate my birthday than being here at DataConnect Conference! The day has been full of fun surprises: I got to meet the incredible Sol Rashidi in person after her inspiring keynote and got a signed copy of the AI Survival Guide 🤩 I caught up with some of my favorite data and AI leaders, including Rehgan Bleile (Avon), Heather Harris, Sadie St. Lawrence (color twins today), and Liz Crowe, Ph.D.! I enjoyed seeing my team members Kristy Hollingshead and Lauren Burke-McCarthy rock it on stage! Thanks, Women in Analytics (WIA), for making my 33rd time around the sun so special ❤️
Guardrails are the rules, assumptions, and filters that sit between users and models, ensuring that weird inputs don't get in and weird outputs don't derail users. They are especially important for keeping GenAI safe in production. Check out the article I wrote for Open Data Science Conference (ODSC) explaining guardrails in plain English 👇
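To make the idea concrete, here's a minimal sketch of what an input/output guardrail pair can look like in code. The patterns, function names, and redaction rule below are hypothetical placeholders (not from the article); a production system would use vetted, far richer filters:

```python
import re

# Hypothetical filters: block a known prompt-injection phrase on the way in,
# and redact SSN-like strings on the way out.
BLOCKED_INPUT = re.compile(r"ignore (all )?previous instructions", re.IGNORECASE)
PII_OUTPUT = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def guard_input(prompt: str) -> str:
    """Reject prompts that match known-bad input patterns."""
    if BLOCKED_INPUT.search(prompt):
        raise ValueError("Input rejected by guardrail")
    return prompt

def guard_output(response: str) -> str:
    """Redact sensitive-looking strings before they reach the user."""
    return PII_OUTPUT.sub("[REDACTED]", response)

def answer(prompt: str, model) -> str:
    """Wrap any model callable with both guardrails."""
    return guard_output(model(guard_input(prompt)))
```

The point isn't these specific regexes; it's the shape: every request and every response passes through checks that the application controls, independent of what the model does.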
Check out the latest blog by AI expert Cal Al-Dhubaib! Discover the importance of guardrails in AI design and how to implement them effectively. Whether you're building AI or using it, this article will help you ensure safety and trustworthiness at every stage of development. Read now to learn more! 👇 https://lnkd.in/dxsHbejU #AI #TechLeadership #AIethics #MachineLearning #TechSafety
En route to DataConnect Conference with the one and only Jennifer Strong, who coincidentally ended up on the same flight. Our chosen reading materials and the serendipity couldn't be more representative of why we've become friends ❤️! Excited to link up with some of my favorite data folks, including Rehgan Bleile (Avon) and a host of our Further crew: Emily Clark, Lauren Burke-McCarthy, Kristy Hollingshead, Juliana Novic, Rachel R., and Victoria Schurr. Come say hi if you're in the area!
As enterprises grapple with AI strategy and emerging regulatory pressure, high-performing companies are establishing cross-functional AI councils to align AI use with enterprise strategy, values, and regulatory requirements. Recently, one of our healthcare clients was approached to participate in a research partnership to build a disease-specific large language model; thanks to their established risk management practices, they were able to ask the right questions and implement appropriate guardrails. In my most recent Forbes Technology Council article, I share:

• Key roles to include on an AI council (hint: it's more than just tech experts)
• Core responsibilities and deliverables
• Real-world examples of successful AI councils
• Common pitfalls to avoid and how to overcome them

Whether you're just starting your AI journey or looking to level up your governance, AI councils are critical for staying competitive while mitigating risk. With incoming regulations like the EU AI Act, having a strong AI governance structure is no longer optional. Check out the full article below to learn how to build a successful AI council for your organization 👇 #AI #AIGovernance #ResponsibleAI #Innovation
Council Post: Why Organizations Need An AI Council—And How To Form One
Another outstanding All Tech Is Human evening in NYC. The theme was "Hacking algorithms around factuality, bias, and misdirection." We got to learn about algorithmic auditing and evaluation from the legendary Dr. Rumman Chowdhury and Jiahao Chen, followed by an outstanding group of panelists. Here are some of the takeaways that stood out to me:

• Responsible AI is a profession of tradeoff optimization. Balancing performance with responsibility given fixed resources and constraints is key. The art lies in doing the right thing with powerful models, transparently and fairly, without incurring unsustainable costs.

• Algorithmic evaluation isn't new; it's been established in banking for nearly 50 years. However, generative models produce more complex outputs, making it harder to define "correct." This sociotechnical challenge requires considering unknown unknowns and underrepresented parts of the digital world.

• The scale of models and data has traditionally required technical expertise, but low-code tools can democratize model evaluation, bringing in much-needed perspectives from sociology, law, and the humanities.

And if there's one skill you should be cultivating today? Become a translator. Few individuals have enough mastery of the languages of law, governance, data, AI, and technology, all necessary in today's AI-first world. These insights were especially timely and meaningful as my team and I embark on building out our own AI auditing practice. I'd love to hear from others in the field: what's your best advice for a newly forming AI auditing team?
Blown away by yet another Anthropic drop! Model performance evaluation on specific tasks often requires highly technical knowledge. With Workbench, you're able to:

1. Get a generated prompt by describing a task as you would to a colleague.
2. Simulate data to test the prompt against.
3. Evaluate the prompt's performance.
4. Modify the prompt and test multiple versions on the same generated data.

All code-free. Check out the full YouTube demo 👇
Evaluate prompts in the Anthropic Console
https://www.youtube.com/
Despite all the recent advances, at its core AI is pattern-matching technology. The efficacy and safety of these systems depend on the quality of the examples used to "teach" the underlying models. And here's one guarantee: these models will be wrong some of the time. When it comes to heavily regulated industries, it's critical to design workflows that account for the likelihood of a model failing and appropriately integrate humans in the loop. Check out this article I wrote with Open Data Science Conference (ODSC) that highlights:

1. How to identify suitable problems to solve with AI
2. How complexity drives cost and risk
3. How to use these insights to prioritize projects

Check out the full article below, cute squirrel pics included 🐿️👇
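As a minimal illustration of the human-in-the-loop idea (the threshold, field names, and labels here are hypothetical, not from the article): route any prediction the model isn't confident about to a human reviewer instead of acting on it automatically.

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float  # model score in [0, 1]; assumed reasonably calibrated

# Hypothetical cutoff; in practice it's tuned to the cost of a wrong answer
# (a denied insurance claim is costlier than a mislabeled support ticket).
REVIEW_THRESHOLD = 0.90

def route(pred: Prediction) -> str:
    """Auto-approve only high-confidence predictions; send the rest
    to a human reviewer."""
    return "auto_approve" if pred.confidence >= REVIEW_THRESHOLD else "human_review"
```

In a regulated setting, the review queue itself becomes a source of labeled corrections, so the workflow gets safer as the model's known failure modes are fed back into training.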
Unlock the potential of AI in high-risk industries with insights from expert Cal Al-Dhubaib. In his blog, he breaks down the barriers and simplifies the steps to integrate AI in sectors like healthcare, finance, and education. No jargon, just clear guidance to help you get started 👇 https://lnkd.in/dPQKz29c #AI #Innovation #HealthcareAI #FinanceAI #EducationAI #HighRiskIndustries
How To Get Started With Building AI in High-Risk Industries
https://opendatascience.com