# Daemonizing The Server

This page will walk you through the general steps of setting up the server as a daemon/service on Windows (Linux coming later).

# Windows

Currently the server does not support running as a native Windows service. However, you can use a third-party tool like NSSM (the Non-Sucking Service Manager) to create one. Here are the steps to do so:

1. Download the latest version of NSSM from [here](https://nssm.cc/download)
2. Extract the downloaded zip file and navigate to the extracted folder (herein `.\nssm`)
3. Open a command prompt with administrative privileges and navigate to `.\nssm\win64`
4. Run the following command to open the NSSM service installation GUI:

```bash
nssm install
```

5. On the Application tab, set the following options:
   - Path:
     - `%WINDIR%\System32\WindowsPowerShell\v1.0\powershell.exe`
   - Startup directory:
     - `C:\path\to\nsfw_ai_model_server`
   - Arguments:
     - `-ExecutionPolicy ByPass -NoExit -Command "& '<MINICONDA3_DIR>\shell\condabin\conda-hook.ps1'; conda activate '<MINICONDA3_DIR>'; start.ps1"`
     - Change `<MINICONDA3_DIR>` to the path of your Miniconda3 installation directory (e.g. `C:\Users\username\miniconda3`)
   - Service name:
     - `AI Tagger Server` (or any other name you prefer)
6. Optionally, set the following options under the I/O tab for logging:
   - Output file:
     - `C:\path\to\nsfw_ai_model_server\logs\stdout.log`
   - Error file:
     - `C:\path\to\nsfw_ai_model_server\logs\stderr.log`
7. Click the Install service button
8. A dialog box will confirm that the service has been installed. Click OK
9. Open the Services application by pressing `Win + R` and typing `services.msc`
10. Locate the service named `AI Tagger Server` (or the name you specified) and start it
11. The AI Tagger server should now be running as a service
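
If you prefer to script the setup rather than use the GUI, NSSM can also be configured entirely from the command line. This is a sketch using the same example service name and paths as the GUI steps above; the `AppParameters`, `AppDirectory`, `AppStdout`, and `AppStderr` settings mirror the corresponding GUI fields, and the quoting of the nested PowerShell command may need adjusting for your shell:

```bash
# Run from an elevated prompt in .\nssm\win64
nssm install "AI Tagger Server" "%WINDIR%\System32\WindowsPowerShell\v1.0\powershell.exe"
nssm set "AI Tagger Server" AppParameters "-ExecutionPolicy ByPass -NoExit -Command \"& '<MINICONDA3_DIR>\shell\condabin\conda-hook.ps1'; conda activate '<MINICONDA3_DIR>'; start.ps1\""
nssm set "AI Tagger Server" AppDirectory "C:\path\to\nsfw_ai_model_server"
nssm set "AI Tagger Server" AppStdout "C:\path\to\nsfw_ai_model_server\logs\stdout.log"
nssm set "AI Tagger Server" AppStderr "C:\path\to\nsfw_ai_model_server\logs\stderr.log"
nssm start "AI Tagger Server"
```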

### Uninstalling the service

To uninstall the service, follow these steps:

1. Open a command prompt with administrative privileges and navigate to `.\nssm\win64`
2. Run the following command:

```bash
nssm remove "AI Tagger Server"
```

3. The service should now be uninstalled
# Installing the AI Tagger Server on Linux (Headless)

This guide is primarily developed using Ubuntu 22.04, but any other Debian-based distribution should work as well.

### Install Miniconda3

```bash
# Update and upgrade the system
sudo apt update && sudo apt upgrade -y

# Install the required packages
sudo apt install -y unzip

# Download and install Miniconda3
mkdir -p ~/miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3

# Remove the installer
rm ~/miniconda3/miniconda.sh

# Add Miniconda to PATH for future shells
echo "export PATH=~/miniconda3/bin:\$PATH" >> ~/.bashrc

# Update PATH now so we can use conda right away
export PATH=~/miniconda3/bin:$PATH
conda init
```

### Install the AI Tagger Server

```bash
# Create a folder for the server
mkdir -p ~/nsfw_ai_model_server

# Download the latest server release
wget https://github.com/skier233/nsfw_ai_model_server/releases/latest/download/nsfw_ai_model_server.zip -O ~/nsfw_ai_model_server/nsfw_ai_model_server.zip

# Unzip the server
unzip ~/nsfw_ai_model_server/nsfw_ai_model_server.zip -d ~/nsfw_ai_model_server

# Remove the zip file
rm ~/nsfw_ai_model_server/nsfw_ai_model_server.zip

# Move the versioned files up to the top level
cd ~/nsfw_ai_model_server
mv 2.0/* .

# Enable execution of the scripts
chmod +x install.sh start.sh update.sh
```

### Install the Model(s)

1. Download the relevant models from the [models page](https://github.com/skier233/nsfw_ai_model_server/wiki/AI-Models)
2. `scp` or otherwise copy the model(s) to the server
   - `scp ~/Downloads/gentler_river.zip user@server:~/`
3. `unzip` the model(s) so that their `models` and `config` folders end up in the `~/nsfw_ai_model_server` folder:
   - `unzip ~/gentler_river.zip -d ~/nsfw_ai_model_server`
4. Optionally, check the `~/nsfw_ai_model_server/config/config.yaml` file and adjust the paths to the models
   - `nano ~/nsfw_ai_model_server/config/config.yaml`

### Install the Conda Environment

```bash
cd ~/nsfw_ai_model_server
source ./install.sh
```

- If you're using premium models, you'll get an error about a missing license file; it will provide a URL to authenticate with Patreon and download the license file.
- Download the license file, place it in the `~/nsfw_ai_model_server/models` folder, then start the server again.
- Depending on the models used, you may need to do this more than once (once for Premium models, once for VIP models).

### Start the Server

```bash
cd ~/nsfw_ai_model_server
source ./start.sh
```

### Update the Server

```bash
cd ~/nsfw_ai_model_server
source ./update.sh
```

### Daemonize the Server

There is also a script to daemonize the server, located [here](https://git.vfsh.dev/voidf1sh/skier-ai-tagger/src/branch/main/daemonize.sh), along with one to uninstall the daemon, [here](https://git.vfsh.dev/voidf1sh/skier-ai-tagger/src/branch/main/undaemonize.sh).

1. Create a systemd service file:

```bash
sudo nano /etc/systemd/system/nsfw_ai_model_server.service
```

2. Paste the following into the file, adjusting the placeholders:

```ini
[Unit]
Description=Skier233's NSFW AI Model Server
After=network.target

[Service]
Type=simple
User=<USERNAME>
Group=<GROUPNAME>
WorkingDirectory=/home/<USERNAME>/nsfw_ai_model_server
ExecStart=/bin/bash -c "source /home/<USERNAME>/miniconda3/etc/profile.d/conda.sh && conda activate ai_model_server && python server.py"
Restart=on-failure
Environment="PATH=/home/<USERNAME>/miniconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"

[Install]
WantedBy=multi-user.target
```

- Replace `<USERNAME>` and `<GROUPNAME>` with your username and group name respectively.

3. Reload the systemd daemon and start the service:

```bash
sudo systemctl daemon-reload
sudo systemctl enable --now nsfw_ai_model_server
```
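
Once the service is enabled, standard `systemctl`/`journalctl` commands will confirm it started and let you follow its output (the unit name matches the service file created above):

```bash
# Verify the service is active
sudo systemctl status nsfw_ai_model_server

# Follow the server's logs live (Ctrl+C to stop)
sudo journalctl -u nsfw_ai_model_server -f
```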
This guide will walk you through the process of setting up and using Skier's NSFW AI Tagger on a remote (within the LAN) Windows 11 machine.

**_You will need 20-30GB+ free on the server machine!!_**

# Anaconda Prep

1. Browse to the [Anaconda website](https://www.anaconda.com/download)
2. Register (or Skip)
3. Scroll down to **Miniconda Installers** and download the latest graphical installer for Windows
4. Launch the installer
5. Install for your user only
6. ONLY select the option to add it to your `PATH` unless you know what you're doing with the other options

# NSFW AI Server Prep

1. Download the latest release from the `skier233/nsfw_ai_model_server` [releases page](https://github.com/skier233/nsfw_ai_model_server/releases)
2. Unzip the file to a noted location (herein `.\model_server`)
3. Download any models needed from the [models table](https://github.com/skier233/nsfw_ai_model_server/wiki/AI-Models)
4. Unzip the desired models
5. Copy the `models` and `config` folders from the downloaded model to `.\model_server\`, overwriting old config files if needed
6. Optionally, check your `.\model_server\config\config.yaml` file to enable the correct pipelines ([see more...](https://github.com/skier233/nsfw_ai_model_server/wiki/AI-Models#switching-the-currently-enabled-model))
7. Launch Windows Terminal and select the Anaconda PowerShell environment
8. Navigate to `.\model_server\` and execute `.\install.ps1`
9. Wait. A while. If this is your first time, there are many large dependencies that need to download.
10. If you are using a paid model, you'll be presented with the Patreon login to verify your license
11. Eventually you'll be met with:

```bash
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
```

12. Ready to go!
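
Once the Uvicorn line appears, you can confirm the server answers from another machine on the LAN. A minimal Python sketch, assuming the example address `192.168.1.50` for the Windows machine (swap in your own IP; any HTTP response, even a 404, means the server is up):

```python
import urllib.error
import urllib.request

def server_reachable(url: str, timeout: float = 5.0) -> bool:
    """True if the host answers HTTP at all (any status counts as alive)."""
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except urllib.error.HTTPError:
        return True   # got an HTTP response (e.g. 404): the server is up
    except OSError:
        return False  # connection refused, timeout, DNS failure, ...

# Example address only; use your Windows machine's LAN IP
print(server_reachable("http://192.168.1.50:8000/"))
```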

# Starting the Server Later

1. Launch Windows Terminal
2. Navigate to `.\model_server`
3. Run `.\start.ps1`

# Stash Prep

1. Install the plugin (**Settings > Plugins > Community**)
2. Open your Stash's config folder
3. Edit `plugins/community/ai_tagger/config.py`
4. Set `API_BASE_URL` to your Windows 11 machine's IP
5. Adjust performance options as desired
6. Adjust tag options as desired
7. Set up path mutation (see Storage Prep for more info). Example:

```python
path_mutation = {"/remote-data": "Z:\\porn", "/data": "X:\\porn"}
```

8. Remember to double-escape backslashes!
9. Run `pip install -r requirements.txt` within your Stash's environment
   - You may need to add `--break-system-packages` if your Python environment is managed (e.g. in a Docker container)
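
To see what the mapping in step 7 does, here is a hypothetical illustration of the prefix rewriting (this is not the plugin's actual code, only a sketch of the idea; `mutate_path` is an invented helper name):

```python
# Hypothetical illustration of how a path_mutation mapping rewrites
# Stash's server-side paths into Windows paths; not the plugin's code.
path_mutation = {"/remote-data": "Z:\\porn", "/data": "X:\\porn"}

def mutate_path(path: str) -> str:
    """Swap a matching server-side prefix for its mapped Windows prefix."""
    for prefix, mapped in path_mutation.items():
        if path == prefix or path.startswith(prefix + "/"):
            # Keep the remainder, converting / separators to backslashes
            rest = path[len(prefix):].replace("/", "\\")
            return mapped + rest
    return path  # no mapping matched; leave unchanged

print(mutate_path("/data/movies/clip.mp4"))  # X:\porn\movies\clip.mp4
```

This also shows why step 8 matters: in Python source, `"X:\\porn"` is the escaped spelling of the literal path `X:\porn`.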
# Docker Stash Prep

1. Get shell access on your Docker host machine (e.g. open Terminal, or use `ssh`)
2. Launch a shell in the Docker container:

```bash
docker exec -it Stash /bin/sh
```

3. Create a new Python virtual environment:

```bash
python -m venv /root/.stash/venv
ls -al /root/.stash/venv
```

   - Adjust your in-container path to the config folder if it's different for some reason
4. Activate the venv:

```bash
source /root/.stash/venv/bin/activate
```

   - You should see the `(venv)` prefix in your shell prompt
5. Install the dependencies:

```bash
pip install -r /root/.stash/plugins/community/ai_tagger/requirements.txt
```

6. Open **Stash > Settings > System**
7. Edit **Python Executable Path**: `/root/.stash/venv/bin/python`
8. Done! Restart the Stash container to double check.

# Storage Prep

1. Make sure your Stash's library folders are on a drive or network share that is also accessible to the Windows machine
2. For each Library, mount the share in Windows using Map Network Drive
3. For each Library, add an entry like `"/server/path": "<Z>:\\share\\path"` in the Stash plugin config (see Stash Prep step 7)