Mystical Palm Reader AI: Unveiling Your Destiny with Pinata and AI

This is a submission for the Pinata Challenge

What I Built

I developed “Palm Reader AI,” an innovative (and mostly fun) web app that combines the mystical art of palm reading with AI models from Hugging Face and Pinata’s decentralized storage. Users upload images of their palms, which an AI model analyzes to generate personalized “readings.” The palm images and the audio versions of the readings are stored via Pinata’s IPFS service, giving user data decentralized, persistent storage.

Key features include:

AI-powered palm analysis
Text-to-speech conversion of readings
Decentralized storage of images and audio files
Interactive UI with real-time feedback
Gallery of past readings

Demo

You can try out the Palm Reader AI live at: https://palm-reader-ai2.onrender.com/

Here are some screenshots of the application in action:

My Code

The full source code for this project is available on GitHub:


ehernandezvilla
/
palm-reader-ai

DEV Pinata challenge – Palm Reader AI

Palm Reader AI 🔮🖐️


Image: Unsplash – Viva Luna Studios

Palm Reader AI is an innovative (but mostly fun) web application that uses artificial intelligence to analyze palm images and provide mystical readings. This project was developed as part of the Dev Pinata challenge, showcasing the integration of AI technologies with decentralized storage solutions.

🌟 Features

Upload palm images for AI analysis
Receive personalized palm readings
Text-to-speech functionality for audio readings
Gallery of past readings
Responsive and mystical UI design

🚀 Tech Stack

Frontend: Next.js with React

Styling: Tailwind CSS

UI Components: shadcn/ui

Animations: Framer Motion

Icons: Lucide React

API Requests: Axios

Text-to-Speech: Hugging Face Inference API

Decentralized Storage: Pinata IPFS

🧠 AI Models

Palm Analysis: facebook/detr-resnet-50 (Object Detection)

Text Generation: meta-llama/Llama-2-7b-chat-hf

Text-to-Speech: espnet/kan-bayashi_ljspeech_vits
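As a sketch of how these three models could be chained with the Hugging Face Inference client, something like the following might work (the `generateReading` helper and the prompt wording are my assumptions, not the project’s actual code):

```javascript
// Hypothetical pipeline chaining the three models listed above.
// `hf` is assumed to be a Hugging Face Inference client instance
// (e.g. new HfInference(token) from @huggingface/inference).
async function generateReading(hf, imageBlob) {
  // 1. Detect regions/features in the palm image.
  const detections = await hf.objectDetection({
    model: 'facebook/detr-resnet-50',
    data: imageBlob,
  });

  // 2. Turn the raw detections into a mystical reading.
  const { generated_text } = await hf.textGeneration({
    model: 'meta-llama/Llama-2-7b-chat-hf',
    inputs: `Write a short, mystical palm reading inspired by: ${JSON.stringify(detections)}`,
  });

  // 3. Narrate the reading for the text-to-speech feature.
  const audioResponse = await hf.textToSpeech({
    model: 'espnet/kan-bayashi_ljspeech_vits',
    inputs: generated_text,
  });

  return { reading: generated_text, audioResponse };
}
```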

🏗️ Project Structure

components/: React components (Hero, FileUpload, PalmReading, etc.)

pages/: Next.js pages…

More Details

Pinata played a crucial role in the development of this application. Here’s how I utilized Pinata’s services:

Image Storage: When a user uploads a palm image, it’s immediately stored on IPFS through Pinata. The returned IPFS hash is then used to retrieve the image for AI analysis.

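The `uploadToPinata` helper referenced throughout isn’t shown in the post; a minimal sketch against Pinata’s `pinFileToIPFS` REST endpoint might look like this (the environment variable name and error handling are my assumptions):

```javascript
// Sketch of the uploadToPinata helper, using Pinata's pinFileToIPFS
// REST endpoint with JWT auth. The PINATA_JWT env var name is an assumption.
async function uploadToPinata(file) {
  const form = new FormData();
  form.append('file', file);

  const res = await fetch('https://api.pinata.cloud/pinning/pinFileToIPFS', {
    method: 'POST',
    headers: { Authorization: `Bearer ${process.env.PINATA_JWT}` },
    body: form,
  });
  if (!res.ok) throw new Error(`Pinata upload failed: ${res.status}`);
  // The JSON response includes the CID of the pinned file under IpfsHash.
  return res.json(); // e.g. { IpfsHash, PinSize, Timestamp }
}
```

The returned `IpfsHash` is what the app later feeds into the gateway URL to retrieve the content.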

Audio Storage: After generating the palm reading, the application uses text-to-speech to create an audio version. This audio file is also stored on IPFS via Pinata.

```javascript
const audioResponse = await hf.textToSpeech({
  model: 'espnet/kan-bayashi_ljspeech_vits',
  inputs: reading,
});
const audioBlob = new Blob([audioResponse], { type: 'audio/wav' });
const audioFile = new File([audioBlob], 'reading.wav', { type: 'audio/wav' });
const audioUploadResult = await uploadToPinata(audioFile);
```

Content Retrieval: The application uses Pinata’s IPFS gateway to retrieve stored images and audio files for display and playback in the user interface.

```javascript
const imageUrl = `https://gateway.pinata.cloud/ipfs/${ipfsHash}`;
```
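The same gateway URL pattern works for audio playback in the browser. A small sketch (both function names are hypothetical, not from the project):

```javascript
// Build a Pinata gateway URL for a pinned CID.
function gatewayUrl(ipfsHash) {
  return `https://gateway.pinata.cloud/ipfs/${ipfsHash}`;
}

// Play a stored audio reading straight from the gateway.
// (Audio is a browser-only API.)
function playReading(ipfsHash) {
  const audio = new Audio(gatewayUrl(ipfsHash));
  return audio.play();
}
```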

Persistent Storage: Because everything goes through Pinata’s IPFS pinning service, all user data (palm images and audio readings) is stored in a decentralized manner, ensuring persistence and availability.

The integration of Pinata’s services allowed me to create a robust, decentralized storage solution for user-generated content, which is critical for the functionality and user experience of the Palm Reader AI application.
