Merge Space README with project README
- .gitattributes +35 -0
- README.md +20 -17
.gitattributes ADDED
@@ -0,0 +1,35 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.mlmodel filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.npy filter=lfs diff=lfs merge=lfs -text
+*.npz filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pickle filter=lfs diff=lfs merge=lfs -text
+*.pkl filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tar filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zst filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
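These are the Hub's standard rules for routing large binary formats (model weights, archives, tensors) through Git LFS rather than plain Git. As a rough illustration of how such patterns apply, the sketch below approximates gitattributes matching with Python's `fnmatch` and uses only a representative subset of the rules above, so treat it as an illustrative aid rather than Git's actual matching logic:

```python
from fnmatch import fnmatch

# Representative subset of the LFS patterns added in .gitattributes above.
LFS_PATTERNS = [
    "*.bin", "*.h5", "*.onnx", "*.safetensors", "*.pt", "*.pth",
    "saved_model/**/*", "*tfevents*",
]

def routed_through_lfs(repo_path: str) -> bool:
    """Approximate check: does a repo-relative path match any LFS pattern?"""
    return any(fnmatch(repo_path, pattern) for pattern in LFS_PATTERNS)

print(routed_through_lfs("model.safetensors"))         # True  -> handled by LFS
print(routed_through_lfs("runs/events.out.tfevents"))  # True  -> matches *tfevents*
print(routed_through_lfs("README.md"))                 # False -> stored as plain text
```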
README.md CHANGED
@@ -1,3 +1,12 @@
+---
+title: Local Inference
+emoji: π
+colorFrom: pink
+colorTo: gray
+sdk: docker
+pinned: false
+---
+
 # AI Chat & Summarization Web App 🤖
 
 A beautiful web-based AI application featuring **Chat Generation** and **Text Summarization** powered by Hugging Face models.
@@ -41,18 +50,13 @@ python server.py
 
 4. Open your browser to `http://localhost:8000`
 
-## Deploy
+## Deploy Options
 
-### Option 1:
+### Option 1: Hugging Face Spaces (Docker)
 
-
-2. Go to [Render Dashboard](https://dashboard.render.com/)
-3. Click "New +" → "Web Service"
-4. Connect your GitHub repository
-5. Render will automatically detect the `render.yaml` file
-6. Click "Create Web Service"
+See [DEPLOY_TO_SPACES.md](DEPLOY_TO_SPACES.md) for detailed instructions.
 
-### Option 2: Manual Deploy
+### Option 2: Render Manual Deploy
 
 1. Go to [Render Dashboard](https://dashboard.render.com/)
 2. Click "New +" → "Web Service"
@@ -66,12 +70,12 @@ python server.py
 
 5. Click "Create Web Service"
 
-### Important Notes for
+### Important Notes for Deployment
 
 - ⚠️ **First startup takes 5-10 minutes** as models download (1.5GB+)
 - 💾 **Disk space**: Free tier has 512MB, models need ~1.5GB. Use **Starter plan** or higher
 - π **Auto-sleep**: Free tier sleeps after 15min of inactivity, takes ~30s to wake up
-- 🎯 **Recommendation**: Use **Starter plan
+- 🎯 **Recommendation**: Use **Starter plan** for:
   - More disk space
   - Better performance
   - No auto-sleep
@@ -125,13 +129,13 @@ LocalInference/
 - **Backend**: FastAPI, PyTorch, Transformers
 - **Frontend**: HTML5, CSS3, JavaScript (Vanilla)
 - **Models**: Hugging Face Transformers
-- **Deployment**: Render
+- **Deployment**: Hugging Face Spaces, Render
 
 ## Troubleshooting
 
-### Models not loading
--
-- Check logs in
+### Models not loading
+- Check disk space in deployment platform
+- Check logs in platform dashboard
 
 ### Slow first response
 - Models load on first request, subsequent requests are faster
@@ -139,7 +143,7 @@ LocalInference/
 
 ### Out of memory errors
 - Reduce `max_new_tokens` in chat requests
-- Use
+- Use plan with more RAM
 
 ## License
 
@@ -152,4 +156,3 @@ Pull requests are welcome! For major changes, please open an issue first.
 ---
 
 Made with ❤️ using Hugging Face Transformers
-
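The `sdk: docker` front matter added at the top of README.md is what makes Hugging Face Spaces build the app from the repository's Dockerfile. DEPLOY_TO_SPACES.md presumably covers the git-push flow; as an alternative sketch, the same Space can be created and populated with the `huggingface_hub` client (the repo id below is a placeholder, and an authenticated token from `huggingface-cli login` or `HF_TOKEN` is assumed):

```python
from huggingface_hub import HfApi

api = HfApi()  # picks up the token from huggingface-cli login or HF_TOKEN

# Placeholder repo id; replace with your own username and Space name.
repo_id = "your-username/local-inference"

# Create the Space as a Docker Space (no-op if it already exists).
api.create_repo(repo_id=repo_id, repo_type="space", space_sdk="docker", exist_ok=True)

# Upload the project directory to the Space.
api.upload_folder(folder_path=".", repo_id=repo_id, repo_type="space")
```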
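On the troubleshooting side, "models load on first request" means the first chat or summarization call pays the full download and initialization cost, which is also why the first startup on a fresh deploy takes several minutes. Below is a minimal sketch of loading the pipelines eagerly at startup instead (the model names and the health endpoint are placeholders; the real choices live in `server.py` and are not part of this diff):

```python
from fastapi import FastAPI
from transformers import pipeline

app = FastAPI()

# Constructed at import time so the first HTTP request does not trigger a download.
# Model choices are placeholders; the actual app defines its own in server.py.
chat_pipe = pipeline("text-generation", model="distilgpt2")
summarize_pipe = pipeline("summarization")  # library default checkpoint

@app.get("/healthz")
def healthz() -> dict:
    """Cheap readiness probe: returns once both pipelines exist."""
    return {"chat_ready": chat_pipe is not None, "summarizer_ready": summarize_pipe is not None}
```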
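For the out-of-memory note, `max_new_tokens` caps how much text is generated per chat request, which in turn bounds peak memory during generation. A hypothetical client call is shown below; the `/chat` path and the `message` field are assumptions about the app's API, and only the `max_new_tokens` knob comes from the README:

```python
import requests

# Hypothetical endpoint and payload shape; only max_new_tokens is named in the README.
response = requests.post(
    "http://localhost:8000/chat",
    json={
        "message": "Summarize why the first request is slow.",
        "max_new_tokens": 64,  # smaller values reduce peak memory during generation
    },
    timeout=120,
)
print(response.json())
```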