AI Rundown by Lightscape Partners – 02/05/24

OpenAI prepares for a "family-friendly" GPT experience, Mistral AI's new LLM shows promising results, the EU's AI Act gains more momentum, and much more.

Image Credit: Ted Wagner, made with DALL·E

Good morning, happy Monday, and welcome back to another edition of the AI Rundown by Lightscape Partners.

A lot happened last week in AI, so let’s not waste any time.

OpenAI partnered with Common Sense Media to develop guidelines for a "family-friendly" GPT experience, open-source company Mistral confirmed that its newest model continues to close the gap with OpenAI's closed-source GPT-4, and the EU's AI Act cleared its final major hurdle toward full adoption.

Keep reading to see what else went on last week in AI.

Top AI Developments of the Week

OpenAI and Common Sense Media's Partnership for AI Guidelines. Link.

OpenAI collaborates with Common Sense Media to develop guidelines and educational materials for AI usage among parents, educators, and young adults. This partnership aims to curate "family-friendly" GPTs in OpenAI's GPT Store, aligning with Common Sense's safety and suitability standards.

  • The initiative targets the responsible use of AI tools by children, preteens, and teens.

  • OpenAI will work with Common Sense Media to ensure that the GPTs available in the GPT Store are suitable for younger audiences.

  • The collaboration seeks to raise awareness among parents and educators about GenAI tools and their impact.

  • OpenAI faces regulatory scrutiny over potential harms from its AI tools, making this partnership a step towards addressing these concerns.

This partnership highlights the growing need for responsible AI use guidelines, especially for younger users. It aims to balance the innovative potential of AI with the need for safety and ethical considerations. By involving an organization like Common Sense Media, OpenAI demonstrates a commitment to addressing public and regulatory concerns about AI's impact on society.

Mistral CEO Confirms Leak of New AI Model Rivaling GPT-4. Link.

Mistral, a leading open-source AI company, recently confirmed the leak of a new large language model (LLM) named "miqu-1-70b" by an employee of an early-access customer. The model, posted on Hugging Face and 4chan, demonstrates performance nearing that of OpenAI's GPT-4.

  • The leak originated from an "over-enthusiastic employee" of one of Mistral's early-access customers and was shared on Hugging Face and 4chan.

  • Preliminary tests suggest that "miqu-1-70b" approaches the performance levels of GPT-4, indicating a significant advancement in open-source AI capabilities.

  • Mistral's CEO, Arthur Mensch, acknowledged the leak and hinted at further advancements beyond the leaked model, urging the community to "stay tuned."

The leak of "miqu-1-70b" signifies a potential shift in the AI landscape, highlighting the rapid advancements within the open-source AI community. If Mistral's new models match or exceed GPT-4's performance, it could introduce competitive pressure on OpenAI, particularly in terms of model accessibility and usage costs. This development underscores the dynamic nature of AI research and the increasing importance of open-source contributions to the field.

EU's AI Act Clears Final Major Hurdle Towards Adoption. Link.

The European Union's AI Act, aimed at regulating artificial intelligence applications based on risk assessment, has successfully passed its last significant obstacle towards official adoption after Member State representatives voted to confirm the draft law's final text.

  • The AI Act delineates certain AI applications as unacceptable risks, such as social scoring systems.

  • It introduces governance measures for high-risk AI applications that could impact health, safety, fundamental rights, and more.

  • The act mandates transparency for AI applications like chatbots, while low-risk AI applications will not be covered by the law.

  • All 27 EU Member State ambassadors backed the text, moving the regulation closer to adoption after overcoming potential opposition led by a few countries, including France.

The EU's AI Act is set to redefine the regulatory landscape for AI applications within the union, balancing innovation with the need to mitigate potential risks associated with AI technologies. With its risk-based approach, the act aims to protect health, safety, and fundamental rights without stifling technological advancements. The unanimous support for the final text marks a significant step towards its adoption, with implementation phases designed to allow for adjustment and compliance by affected entities. This legislation could serve as a benchmark for AI regulation globally, emphasizing the EU's commitment to ethical and responsible AI development and use.

Venture

Coris Secures $3.7M for AI-Driven SMB Risk Management. Link.

California-based fintech startup Coris has secured $3.7 million in seed funding to enhance its AI-powered risk management platform for small and medium-sized businesses (SMBs). The funding will be directed toward modernizing outdated risk-evaluation processes with automation and artificial intelligence.

  • The seed round was co-led by Lux Capital and Exponent Capital, with contributions from Y Combinator, Blank Ventures, and several experienced fintech founders.

  • Coris's platform uses large language models to analyze unstructured data for tasks like automated underwriting and fraud prevention, with products like CorShield for combating impersonation fraud.

  • The investment will support the rollout of Coris's products, including CorShield and MerchantProfiler, which aim to transform SMB risk assessment by leveraging AI to provide real-time business verification and fraud prevention.

Coris's entry into the SMB risk management sector represents a significant shift towards AI-driven solutions that streamline onboarding, reduce fraud, and improve operational efficiency. With its innovative approach and strong backing, Coris is poised to transform how companies manage risk, offering a scalable and efficient alternative to traditional methods.

Codeium Secures $65M for AI Development Tools. Link.

Codeium, a California-based AI startup, has raised $65 million in a Series B funding round, reaching a valuation of $500 million. The funding, led by Kleiner Perkins with participation from Greenoaks, General Catalyst, and others, will be used to expand its team and enhance its AI-powered coding toolkit.

  • Codeium's toolkit, which aids software development by providing intelligent code suggestions, is used by over 300,000 developers and generates over 44% of its users' newly committed code.

  • Unlike other AI coding tools, Codeium offers personalized, context-aware code suggestions across more than 70 languages and 40 Integrated Development Environments (IDEs). It can be self-hosted or used as SOC 2 Type 2-compliant SaaS.

  • With the new funding, Codeium aims to cover the entire software development lifecycle, moving beyond code writing and addressing system design, code maintenance, and fixing security vulnerabilities.

This funding round positions Codeium as a significant player in the AI-assisted software development sector, competing with established names like GitHub Copilot and Amazon SageMaker. By focusing on security and integration within developers' existing workflows, Codeium is poised to significantly enhance productivity and tackle the challenges of modern software development.

Semron Secures $7.9M for Innovative AI Chips. Link.

Semron, a Dresden, Germany-based startup, has raised $7.9 million to develop highly efficient AI chips utilizing 3D packaging technology, aiming to revolutionize AI processing in mobile devices.

  • Semron's breakthrough lies in its 3D semiconductor technology, promising up to 20 times more efficient chip operation. The proprietary CapRAM technology uses a 'memcapacitor' for significantly reduced energy consumption.

  • The funding round, led by Join Capital and featuring contributions from SquareOne, OTB Ventures, and Onsight Ventures, will fuel Semron's ambition to redefine mobile device AI chips.

  • The company envisions enabling vast AI models on compact silicon, facilitating advanced applications like smart contact lenses by leveraging its chips' capacity to support significantly larger AI models without overheating.

Semron's approach could dramatically reduce the energy consumption and cost of running sophisticated AI models, offering a scalable solution as the demand for AI capabilities grows. By targeting the mobile device market with their innovative 3D-packed AI chips, Semron positions itself at the forefront of a potential industry shift towards more efficient, powerful, and cost-effective AI processing solutions.

Metal Launches AI Assistant for Financial Sector. Link.

Metal, a startup from Y Combinator, introduces an AI tool designed to streamline the tedious tasks of financial analysis and portfolio management for venture capital and private equity firms.

  • The AI assistant automates the collection, compilation, and analysis of financial documents like 10-Ks, 10-Qs, and 8-Ks, as well as presentation decks and spreadsheets, for publicly traded companies.

  • Aimed at fund analysts and at venture capital and private equity firms, Metal's AI tool facilitates investment research, due diligence, and portfolio monitoring.

  • Co-founded by Taylor Lowe, a former product manager at Meta, Metal was part of Y Combinator's accelerator program before launching this product.

Metal's AI assistant promises to significantly reduce the manual effort involved in financial analysis, allowing fund managers and analysts to make more informed investment decisions efficiently. This innovation could transform how financial services and private equity funds approach research and portfolio management, making the process faster and more accurate.

Software + Hardware

Meta AI Unveils Code Llama 70B: A New Era in Code Generation. Link.

Meta AI introduces Code Llama 70B, an advanced open-source AI model for code generation that can write and edit code across various programming languages like Python, C++, Java, and PHP. This model represents a significant leap in automating software development processes.

  • Trained on 500 billion tokens of code, Code Llama 70B boasts a larger context window than earlier Code Llama releases and has been fine-tuned for specific coding tasks, offering improved accuracy and adaptability.

  • The CodeLlama-70B-Instruct variant scored 67.8 on HumanEval, showcasing superior functional correctness and logic in code generation, on par with or surpassing other leading models.

  • Includes variants optimized for Python and for instruction-following code generation, enabling a wide range of tasks from web scraping to machine learning applications.

  • Freely available for both research and commercial use, with support across multiple platforms and frameworks (see the usage sketch below).
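
For readers who want to try the model, here is a minimal sketch of prompting the instruct variant through Hugging Face's transformers library. It assumes the weights are published under the codellama/CodeLlama-70b-Instruct-hf Hub ID and that the hardware can host a 70B-parameter model (or a quantized variant); it is an illustration, not Meta's reference usage.

```python
# Minimal sketch: prompting Code Llama 70B Instruct via transformers.
# Assumes access to the (assumed) Hub ID below and hardware that can
# host a 70B-parameter model or a quantized variant of it.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "codellama/CodeLlama-70b-Instruct-hf"  # assumed Hub identifier

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Chat-style prompt; apply_chat_template wraps it in the special tokens
# the instruct variant was fine-tuned on.
messages = [
    {"role": "user", "content": "Write a Python function that reverses a linked list."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```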

Code Llama 70B paves the way for more efficient and creative software development, making coding more accessible to a broader audience. By lowering the barriers to coding and enhancing AI's ability to understand and generate code, Meta AI is setting a new standard for the future of automated software development. This development could revolutionize how we approach coding, foster innovation, and potentially lead to new applications and advancements in technology.

Nightshade Tool Surges in Popularity Among Artists with 250,000 Downloads in 5 Days. Link.

Nightshade, a disruptive tool developed by researchers at the University of Chicago to help artists protect their artworks from unauthorized AI training, has seen an astonishing 250,000 downloads in just five days.

  • Developed to "poison" AI models by subtly altering artwork images, making them misleading for training purposes (the toy sketch below illustrates what a "subtle" alteration means).

  • Although primarily aimed at artists, the tool's vast download numbers suggest a worldwide demand for solutions against AI scraping.

  • The creators are considering combining Nightshade with their earlier tool, Glaze, for enhanced protection and may release an open-source version.
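
Nightshade's actual perturbations are optimized to mislead text-to-image training; purely as a toy illustration of what "subtly altering" an image means, the sketch below adds a low-amplitude random perturbation and measures how invisible it is. The filenames and noise amplitude are illustrative assumptions, and random noise does not by itself poison anything.

```python
# Toy illustration only: Nightshade computes optimized, concept-targeted
# perturbations; this merely shows a low-amplitude pixel change that is
# hard to see yet alters every pixel of the training datum.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("artwork.png").convert("RGB"), dtype=np.float32)

# Random +/- 2 (out of 255) perturbation; Nightshade's is optimized to
# mislead a text-to-image model's training, which random noise does NOT do.
rng = np.random.default_rng(0)
perturbed = np.clip(img + rng.uniform(-2.0, 2.0, size=img.shape), 0, 255)

# Peak signal-to-noise ratio: values above roughly 40 dB are generally
# considered imperceptible; this perturbation lands near 47 dB.
mse = np.mean((img - perturbed) ** 2)
print(f"PSNR: {10 * np.log10(255.0 ** 2 / mse):.1f} dB")

Image.fromarray(perturbed.astype(np.uint8)).save("artwork_perturbed.png")
```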

Nightshade's rapid adoption reflects growing concerns among artists about their work's unauthorized use in training AI models. By offering a practical countermeasure, Nightshade empowers creators to safeguard their intellectual property, signaling a potential shift in how artists interact with AI technologies. This movement could prompt AI developers to reconsider their data collection and training practices, fostering a landscape where artists' rights are more prominently respected.

Google's Bard to Undergo Major Revamp and Rebranding as Gemini. Link.

Google is set to introduce significant updates to Bard, its AI-driven conversational tool, including a rebranding to "Gemini". The changes surfaced in a changelog leaked by Android app developer Dylan Roussel, indicating the introduction of voice chat features and a new "Ultra 1.0" model.

  • Bard will transition to "Gemini", aligning with the name of Google's competitive model launched last year against OpenAI’s GPT-4.

  • The update will introduce voice interaction capabilities to Gemini, enhancing user engagement and accessibility.

  • A premium offering, "Gemini Advanced", will provide ChatGPT Plus-like functionality, including file uploads, under a paid plan.

The rebranding and enhancement of Bard to Gemini mark Google's aggressive move to solidify its position in the AI and conversational model market. By introducing voice chat and advanced file handling features, Google aims to enhance user experience and expand Gemini’s utility, directly competing with established players like OpenAI's ChatGPT. These developments could significantly influence consumer preferences and the competitive dynamics within the AI conversational tools sector.

Ethics in AI

Deepfake Technology Leads to $25.6 Million Scam in Conference Call. Link.

A company was defrauded of $25.6 million through a sophisticated scam involving a deepfake conference call. The South China Morning Post reported that scammers used deepfake technology to impersonate company officers, including the CFO, convincing an employee to transfer funds to five separate Hong Kong bank accounts.

  • The employee was invited to a video call filled with deepfaked representations of company officials who then directed the transfer of funds.

  • This incident highlights the increasing sophistication and danger of frauds utilizing deepfake technology, which has also been used in crypto scam ads on YouTube and blackmail attempts.

  • The total loss amounted to approximately $25.6 million, a substantial financial hit resulting from the exploitation of deepfake technology.

This scam underscores the urgent need for companies to enhance their security protocols and employee training regarding digital communications and verification processes. As deepfake technology becomes more accessible and convincing, the risk of similar scams is likely to increase, posing a challenge to corporate security and trust in digital interactions.

OpenAI's ChatGPT Faces GDPR Violation Allegations in Italy. Link.

Italy's data protection authority (Garante) has issued a notice of objection to OpenAI, suspecting ChatGPT of violating the European Union's General Data Protection Regulation (GDPR). OpenAI is given 30 days to respond to the allegations, which could lead to significant fines and operational changes if confirmed.

  • Concerns include the lack of a valid legal basis for collecting and processing personal data for AI model training.

  • Suspected breaches involve articles related to data processing principles, consent, child safety, transparency, and data minimization.

  • If found guilty, OpenAI might face fines up to €20 million or 4% of its global annual turnover and might have to alter its data processing methods or withdraw services in the EU.

  • ChatGPT was briefly suspended in Italy last year over these concerns and resumed after OpenAI addressed some of the issues, but the investigation continued.

This development stresses the growing regulatory scrutiny over AI technologies, particularly regarding data privacy and protection. OpenAI's response and the final decision by Garante could significantly influence how AI companies handle personal data under GDPR. It also highlights the challenges AI firms face in balancing innovation with compliance in different regulatory environments.

Microsoft Tightens Safety Measures on AI Image Generator Following Taylor Swift Deepfake Controversy. Link.

Microsoft has addressed a loophole in its AI image generator that allowed explicit images of celebrities, including Taylor Swift, to be created and circulated. This move follows a surge in such content becoming a trending topic online.

  • Users manipulated Microsoft's Designer AI image generator to circumvent simple name blocks, leading to the creation and distribution of graphic AI-generated celebrity images.

  • Microsoft CEO Satya Nadella emphasized the importance of guardrails, and the company has since closed the loophole. Sarah Bird, Microsoft's Responsible AI Engineering Lead, confirmed the implementation of strengthened safety systems.

  • Microsoft focuses on ensuring a respectful and safe experience for all users and continues to investigate and improve its services to prevent misuse.

While Microsoft’s prompt action showcases its commitment to responsible AI, reactive measures cannot undo the psychological harm caused by non-consensual pornographic deepfakes.

Other

Hugging Face Introduces “Two-Click” Custom Chatbot Creation. Link.

Hugging Face has streamlined the process of creating custom chatbots, making it possible to do so in just "two clicks" through the Hugging Chat Assistant. This new feature allows users to quickly design and share their chatbots, enhancing the accessibility and customization of conversational AI.

  • The Hugging Chat Assistant simplifies the chatbot development process, enabling users to launch their custom chatbots without extensive coding knowledge.

  • Users can power their chatbots with any available open large language model (LLM), such as Llama 2 or Mistral (see the sketch below).
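
The Assistants themselves are assembled in HuggingChat's web interface rather than in code, but the open models behind them can also be driven programmatically. Here is a minimal sketch using the huggingface_hub library's InferenceClient against Hugging Face's hosted Inference API; the model choice and the persona prompt are illustrative assumptions, not part of the Assistant feature.

```python
# Minimal sketch: querying an open LLM of the kind that powers a Hugging
# Chat Assistant. The Assistant itself is configured in the HuggingChat
# web UI; this only shows the same model driven programmatically.
from huggingface_hub import InferenceClient

# Illustrative model choice; any hosted open LLM with an instruct format works.
client = InferenceClient("mistralai/Mistral-7B-Instruct-v0.2")

# A persona (the kind of system prompt an Assistant encodes) plus one user
# turn, wrapped in Mistral's [INST] instruction tags.
persona = "You are a concise assistant that answers questions about astronomy."
prompt = f"<s>[INST] {persona}\n\nHow far away is the Moon? [/INST]"

print(client.text_generation(prompt, max_new_tokens=200))
```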

This development positions Hugging Face as a competitive platform for custom chatbot creation, offering an alternative to OpenAI's GPTs feature. By enabling easier access to chatbot development, Hugging Face fosters innovation and experimentation within the AI community, potentially leading to a wider range of applications and services powered by conversational AI.

OpenAI's Study on AI's Impact on Biological Threat Creation. Link.

OpenAI has published a study exploring the potential of AI, specifically its GPT-4 model, to aid in creating biological threats. The findings indicate that GPT-4 offers only a minimal improvement in the accuracy of developing biological threats over existing internet resources.

  • The study involved 100 participants, split evenly between biology experts with PhDs and students with some university-level biology education.

  • Participants were divided into groups with either just internet access (control) or internet plus GPT-4 access (treatment) to perform tasks related to biological threat creation.

  • GPT-4 showed a slight accuracy improvement for student participants but did not significantly impact other metrics, including innovation, time taken, and self-rated difficulty. The model sometimes provided erroneous or misleading information.

This study, part of OpenAI’s Preparedness Framework, highlights the limited role of advanced AI in enhancing the creation of biological threats beyond existing capabilities. It underscores the importance of ongoing risk assessment and mitigation efforts to manage the potential misuse of AI technologies, while also pointing out the limitations and inaccuracies of current AI models in complex domains like biotechnology.

Thank you for reading the AI Rundown by Lightscape Partners. Stay tuned for the latest updates in the world of Artificial Intelligence!