Compare commits

...

146 Commits

Author SHA1 Message Date
tuna2134@コマリン親衛隊
96b53d42cd Merge pull request #210 from neodyland/dependabot/cargo/ort-d1ebde9 2025-06-02 17:24:06 +09:00
dependabot[bot]
9765ef51d2 Bump ort from 4745bb3 to d1ebde9
Bumps [ort](https://github.com/pykeio/ort) from `4745bb3` to `d1ebde9`.
- [Release notes](https://github.com/pykeio/ort/releases)
- [Commits](4745bb3a4a...d1ebde95d3)

---
updated-dependencies:
- dependency-name: ort
  dependency-version: d1ebde95d386513fea836593815e8f86f7b96a85
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-06-02 08:07:32 +00:00
tuna2134@コマリン親衛隊
e68f58d698 Fix year 2025-05-27 07:25:04 +09:00
tuna2134@コマリン親衛隊
2124fe4650 Merge pull request #207 from neodyland/dependabot/cargo/ort-4745bb3 2025-05-27 06:57:31 +09:00
tuna2134@コマリン親衛隊
0217c0a4d5 Merge pull request #205 from neodyland/dependabot/cargo/tokio-1.45.1 2025-05-27 06:57:06 +09:00
tuna2134@コマリン親衛隊
1de09597f5 Merge pull request #206 from neodyland/dependabot/cargo/pyo3-0.25.0 2025-05-27 06:56:53 +09:00
tuna2134@コマリン親衛隊
38d86c9249 Merge pull request #208 from neodyland/dependabot/cargo/npyz-0.8.4 2025-05-27 06:56:34 +09:00
dependabot[bot]
ddc132b27b Bump npyz from 0.8.3 to 0.8.4
Bumps [npyz](https://github.com/ExpHP/npyz) from 0.8.3 to 0.8.4.
- [Changelog](https://github.com/ExpHP/npyz/blob/master/CHANGELOG.md)
- [Commits](https://github.com/ExpHP/npyz/compare/0.8.3...0.8.4)

---
updated-dependencies:
- dependency-name: npyz
  dependency-version: 0.8.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-26 21:53:38 +00:00
dependabot[bot]
558cd24677 Bump ort from 90afc70 to 4745bb3
Bumps [ort](https://github.com/pykeio/ort) from `90afc70` to `4745bb3`.
- [Release notes](https://github.com/pykeio/ort/releases)
- [Commits](90afc700d0...4745bb3a4a)

---
updated-dependencies:
- dependency-name: ort
  dependency-version: 4745bb3a4a1b5ab7f2c807b8989638627068cdf9
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-26 21:52:29 +00:00
dependabot[bot]
6657b06786 Bump pyo3 from 0.24.2 to 0.25.0
Bumps [pyo3](https://github.com/pyo3/pyo3) from 0.24.2 to 0.25.0.
- [Release notes](https://github.com/pyo3/pyo3/releases)
- [Changelog](https://github.com/PyO3/pyo3/blob/main/CHANGELOG.md)
- [Commits](https://github.com/pyo3/pyo3/compare/v0.24.2...v0.25.0)

---
updated-dependencies:
- dependency-name: pyo3
  dependency-version: 0.25.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-26 21:52:04 +00:00
dependabot[bot]
2a8c9bafde Bump tokio from 1.45.0 to 1.45.1
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.45.0 to 1.45.1.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.45.0...tokio-1.45.1)

---
updated-dependencies:
- dependency-name: tokio
  dependency-version: 1.45.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-26 21:51:36 +00:00
tuna2134@コマリン親衛隊
d7065ac6eb Merge pull request #204 from neodyland/tuna2134-patch-2
add: dependabot
2025-05-27 06:48:32 +09:00
tuna2134@コマリン親衛隊
0b1dbe4991 Create dependabot.yml 2025-05-27 06:47:06 +09:00
googlefan256
1ad588bfcf hotfix: path miss 2025-05-10 11:33:07 +09:00
googlefan256
9733ba95fa fix: the unknown error 2025-05-09 20:28:04 +09:00
googlefan256
843c16995c fix: cuda typo 2025-05-09 20:19:30 +09:00
googlefan256
f0821ea957 Merge pull request #201 from neodyland/googlefan256/fix-lot
fix: lot of
2025-05-09 20:10:08 +09:00
googlefan256
abc9cec7c7 fix: reduced actions 2025-05-09 20:07:18 +09:00
googlefan256
19e6b7f0e6 fix: ci 2025-05-09 20:05:46 +09:00
googlefan256
451f4497b6 fix: lot of 2025-05-09 17:01:02 +09:00
tuna2134@コマリン親衛隊
e5e92f6211 Merge pull request #141 from neodyland/renovate/typescript-5.x-lockfile
chore(deps): update dependency typescript to v5.8.3
2025-04-28 16:10:28 +09:00
tuna2134@コマリン親衛隊
b835577325 Merge pull request #190 from neodyland/renovate/anyhow-1.x-lockfile
chore(deps): update rust crate anyhow to v1.0.98
2025-04-28 16:08:54 +09:00
tuna2134@コマリン親衛隊
3caf93441a Merge pull request #191 from neodyland/renovate/thiserror-2.x-lockfile
fix(deps): update rust crate thiserror to v2.0.12
2025-04-28 16:08:30 +09:00
tuna2134@コマリン親衛隊
4deefc596b Merge pull request #192 from neodyland/renovate/serde_json-1.x-lockfile
fix(deps): update rust crate serde_json to v1.0.140
2025-04-22 06:42:24 +09:00
tuna2134@コマリン親衛隊
9174aa9b11 Merge pull request #196 from neodyland/renovate/pyo3-0.x
fix(deps): update rust crate pyo3 to 0.24.0
2025-04-22 06:33:40 +09:00
tuna2134@コマリン親衛隊
6bccf0468b Merge pull request #199 from neodyland/renovate/env_logger-0.x-lockfile
chore(deps): update rust crate env_logger to v0.11.8
2025-04-22 06:33:22 +09:00
tuna2134@コマリン親衛隊
bbb3f0003b Merge pull request #200 from neodyland/renovate/ureq-3.x-lockfile
chore(deps): update rust crate ureq to v3.0.11
2025-04-22 06:33:02 +09:00
googlefan256
46de7a9d3f Update README.md for notice 2025-04-18 09:47:02 +09:00
renovate[bot]
252b27de48 chore(deps): update rust crate ureq to v3.0.11 2025-04-15 22:47:28 +00:00
renovate[bot]
1dd3e02562 chore(deps): update rust crate anyhow to v1.0.98 2025-04-14 03:33:49 +00:00
renovate[bot]
4990261ecd chore(deps): update dependency typescript to v5.8.3 2025-04-05 01:50:04 +00:00
renovate[bot]
e873892223 chore(deps): update rust crate env_logger to v0.11.8 2025-04-01 22:07:52 +00:00
tuna2134@コマリン親衛隊
f081b2ed22 Merge pull request #198 from tuna2134/voicevox
Add Editor API
2025-03-31 23:56:25 +09:00
Masato Kikuchi
103eb51ca8 delete 2025-03-31 23:39:44 +09:00
Masato Kikuchi
01541ff381 delete unimport 2025-03-31 23:36:10 +09:00
Masato Kikuchi
70c2341afd format 2025-03-31 23:35:51 +09:00
Masato Kikuchi
a5d783bd65 fix: bug 2025-03-31 23:35:39 +09:00
Masato Kikuchi
633dfc305e delete mut 2025-03-31 23:04:23 +09:00
Masato Kikuchi
53d7daf11a fix 2025-03-31 23:03:30 +09:00
Masato Kikuchi
5abfe732e4 fix bug 2025-03-31 22:45:55 +09:00
tuna2134@コマリン親衛隊
48aef6cef4 Update tts.rs 2025-03-29 11:02:23 +09:00
tuna2134@コマリン親衛隊
64fc74eee6 fix: bug 2025-03-29 10:58:24 +09:00
Masato Kikuchi
6e01103c5d format 2025-03-29 10:50:40 +09:00
Masato Kikuchi
00e95cd77c feat: synthesis 2025-03-29 10:50:30 +09:00
Masato Kikuchi
01f2aaa406 no voicevox 2025-03-28 20:14:51 +09:00
Masato Kikuchi
3785faf81e fix 2025-03-28 20:08:07 +09:00
Masato Kikuchi
70e16f95ad fix: making this VOICEVOX-compatible is difficult, so we'll develop our own editor instead. 2025-03-28 20:06:00 +09:00
Masato Kikuchi
a67df43fc7 fix 2025-03-27 14:42:43 +09:00
Masato Kikuchi
472d1c600f fix: add route 2025-03-27 13:59:00 +09:00
Masato Kikuchi
acf94a1283 format 2025-03-27 13:53:52 +09:00
Masato Kikuchi
dd5c536f39 feat: g2kana_tone 2025-03-27 13:53:27 +09:00
Masato Kikuchi
07637f587d fix: type 2025-03-27 13:23:53 +09:00
Masato Kikuchi
e8dbf956e1 fix: forget to give return 2025-03-27 13:21:07 +09:00
Masato Kikuchi
2687af1a9b clippy 2025-03-27 13:18:22 +09:00
Masato Kikuchi
e915e2bc84 feat: phone_tone_to_kana 2025-03-27 13:17:20 +09:00
Masato Kikuchi
22ed557395 oh 2025-03-27 09:59:08 +09:00
Masato Kikuchi
b8f0477318 feat: audio query request 2025-03-26 16:30:31 +09:00
Masato Kikuchi
f4de3e15ae initial commit: voicevox 2025-03-26 16:14:29 +09:00
Masato Kikuchi
fc944b9d33 split the code for support voicevox 2025-03-26 15:14:22 +09:00
renovate[bot]
4255e15748 fix(deps): update rust crate pyo3 to 0.24.0 2025-03-10 03:15:24 +00:00
renovate[bot]
8bf3906105 fix(deps): update rust crate serde_json to v1.0.140 2025-03-03 11:01:43 +00:00
renovate[bot]
1d80eda325 fix(deps): update rust crate thiserror to v2.0.12 2025-03-03 06:07:28 +00:00
tuna2134@コマリン親衛隊
99a4b130af Merge pull request #189 from tuna2134/renovate/ureq-3.x-lockfile 2025-03-01 16:59:08 +09:00
renovate[bot]
d430a6cb51 chore(deps): update rust crate ureq to v3.0.8 2025-02-28 18:57:35 +00:00
tuna2134@コマリン親衛隊
61aae68d2d Merge pull request #188 from tuna2134/tuna2134-patch-1
Fix example
2025-02-26 15:03:18 +09:00
tuna2134@コマリン親衛隊
abb40d4d2d Fix example 2025-02-26 14:38:09 +09:00
tuna2134@コマリン親衛隊
adb699efe7 Merge pull request #186 from tuna2134/renovate/pyo3-0.x-lockfile 2025-02-26 05:35:36 +09:00
renovate[bot]
00fa8025d7 fix(deps): update rust crate pyo3 to v0.23.5 2025-02-25 13:49:05 +00:00
googlefan256
38c5471dcc Merge pull request #184 from googlefan256/main
Fix the Python build
2025-02-23 17:07:05 +09:00
googlefan256
28e116e67d Update CI.yml 2025-02-23 16:19:20 +09:00
Googlefan
5127d48260 fix 2025-02-23 07:04:20 +00:00
Googlefan
f6e9a52b13 fix 2025-02-23 06:49:49 +00:00
Googlefan
9b7de85c46 fix 2025-02-23 06:36:22 +00:00
Googlefan
45a221af23 fix 2025-02-23 06:22:19 +00:00
Googlefan
97541d6a28 fix 2025-02-23 06:08:26 +00:00
Googlefan
640ef16c4b miss 2025-02-23 05:49:06 +00:00
Googlefan
2b5bc27db7 tell me that I can use arm64 runner, f*** microsoft, update the doc NOW 2025-02-23 05:45:23 +00:00
Googlefan
4d00fcd0bc fix 2025-02-23 05:14:56 +00:00
Googlefan
6fc0a47a78 fix 2025-02-23 05:08:13 +00:00
Googlefan
80e5ddee5b fix 2025-02-23 05:05:40 +00:00
Googlefan
143d05c068 fix 2025-02-23 05:02:36 +00:00
Googlefan
14d604091b fix: shortcut 2025-02-23 04:58:12 +00:00
Googlefan
6fc97b1f33 revert 2025-02-23 04:52:02 +00:00
Googlefan
6c5ea9adce fix: reduce download 2025-02-23 04:48:52 +00:00
Googlefan
e262694702 fix 2025-02-23 04:45:58 +00:00
Googlefan
554b82a504 fix 2025-02-23 04:34:48 +00:00
Googlefan
0a911105a3 fix 2025-02-23 04:32:22 +00:00
Googlefan
e5a4774e1a fix 2025-02-23 04:29:15 +00:00
Googlefan
f036417046 fix 2025-02-23 04:27:14 +00:00
Googlefan
3e0c24e0ec fix 2025-02-23 04:16:23 +00:00
Googlefan
17c1a3467a fix 2025-02-23 04:14:49 +00:00
Googlefan
db954ff710 fix 2025-02-23 04:13:51 +00:00
Googlefan
3d2f36a0bf fix 2025-02-23 04:01:47 +00:00
Googlefan
6a8b64208c fix 2025-02-23 04:00:08 +00:00
Googlefan
f33791cf67 fix 2025-02-23 03:56:22 +00:00
Googlefan
96eb51cf04 fix 2025-02-23 03:53:46 +00:00
Googlefan
0f11b9a192 fix 2025-02-23 03:49:53 +00:00
Googlefan
eefd5b723c fix 2025-02-23 03:46:48 +00:00
Googlefan
c86a79cce5 fix 2025-02-23 03:22:13 +00:00
Googlefan
fd7ba84eef fix 2025-02-23 03:19:36 +00:00
googlefan256
6afa667f2e Update and rename build.yml to CI.yml 2025-02-23 12:17:53 +09:00
googlefan256
70b5852d1b Update build.yml 2025-02-23 12:12:48 +09:00
googlefan256
208ac216b5 Update and rename CI.yml to build.yml 2025-02-23 12:10:06 +09:00
googlefan256
5f0c836a66 Update CI.yml 2025-02-23 11:55:59 +09:00
googlefan256
baebe4efd6 Update .env.sample 2025-02-23 11:11:25 +09:00
googlefan256
024751cb71 Update README.md 2025-02-23 11:11:16 +09:00
tuna2134@コマリン親衛隊
12269d9b86 Fix: syntax 2025-02-23 08:20:06 +09:00
tuna2134@コマリン親衛隊
d13b0f6952 Merge pull request #183 from googlefan256/main
Resolve the LGPL issue
2025-02-22 20:25:12 +09:00
Googlefan
ba40e89411 small fix 2025-02-22 10:50:32 +00:00
googlefan256
52f7a1779b Merge branch 'tuna2134:main' into main 2025-02-22 19:42:26 +09:00
Googlefan
e6932aeeae fix: agpl issue 2025-02-22 10:42:11 +00:00
tuna2134@コマリン親衛隊
881b431e7b Merge pull request #180 from googlefan256/main
A collection of small fixes
2025-02-22 19:23:19 +09:00
Googlefan
755876605e feat: wasm would work 2025-02-22 10:04:40 +00:00
Googlefan
9a5da399be fix 2025-02-22 09:42:19 +00:00
Googlefan
c1c1bbe69a fix 2025-02-22 09:40:47 +00:00
Googlefan
9654ca4781 fix 2025-02-22 09:39:17 +00:00
Googlefan
5fed7e8f41 fix 2025-02-22 09:37:41 +00:00
Googlefan
8ae456b20b fix 2025-02-22 09:35:54 +00:00
Googlefan
908e384cfa fix 2025-02-22 09:29:15 +00:00
Googlefan
932af5cbfb fix 2025-02-22 09:23:08 +00:00
Googlefan
635dacb653 fix 2025-02-22 09:08:49 +00:00
Googlefan
cb884297c7 fix: python ci 2025-02-22 09:01:35 +00:00
Googlefan
b9aa8d1b7c fix 2025-02-22 08:56:10 +00:00
Googlefan
5b364a3c10 fix: build script 2025-02-22 08:26:59 +00:00
Googlefan
a11e57d175 remove unused files 2025-02-22 08:15:35 +00:00
Googlefan
2ddc6e57eb fix: some errors 2025-02-22 08:11:49 +00:00
Googlefan
ddfaf7f28b merge 2025-02-22 08:07:04 +00:00
Googlefan
506ee4d883 refactor 2025-02-22 08:00:17 +00:00
tuna2134@コマリン親衛隊
a1198d6380 Update CI.yml 2025-01-22 23:18:34 +09:00
tuna2134@コマリン親衛隊
c2430fc794 Update CI.yml 2025-01-20 09:49:47 +09:00
tuna2134@コマリン親衛隊
2bfca72f41 Update CI.yml 2025-01-20 09:40:32 +09:00
tuna2134@コマリン親衛隊
95b84ca55b Update CI.yml 2025-01-20 09:26:59 +09:00
tuna2134@コマリン親衛隊
033dd99fb6 Update CI.yml 2025-01-20 09:21:38 +09:00
tuna2134@コマリン親衛隊
15aef30867 Update CI.yml 2025-01-20 09:10:07 +09:00
tuna2134@コマリン親衛隊
adca252272 Merge pull request #164 from tuna2134/renovate/tokio-1.x-lockfile
fix(deps): update rust crate tokio to v1.43.0
2025-01-20 08:55:49 +09:00
tuna2134@コマリン親衛隊
5c74773754 Merge pull request #166 from tuna2134/renovate/pyo3-0.x-lockfile
fix(deps): update rust crate pyo3 to v0.23.4
2025-01-20 08:55:38 +09:00
tuna2134@コマリン親衛隊
2e5edcfb32 Merge pull request #168 from tuna2134/renovate/serde_json-1.x-lockfile
fix(deps): update rust crate serde_json to v1.0.137
2025-01-20 08:55:25 +09:00
tuna2134@コマリン親衛隊
184080bec8 Merge pull request #169 from tuna2134/renovate/utoipa-monorepo
fix(deps): update rust crate utoipa-scalar to 0.3.0
2025-01-20 08:55:13 +09:00
renovate[bot]
d83fcb9f2c fix(deps): update rust crate utoipa-scalar to 0.3.0 2025-01-19 23:52:40 +00:00
renovate[bot]
7095e0ea89 fix(deps): update rust crate serde_json to v1.0.137 2025-01-19 23:51:54 +00:00
tuna2134@コマリン親衛隊
aa07496a08 Merge pull request #163 from tuna2134/renovate/axum-monorepo
fix(deps): update rust crate axum to 0.8.0
2025-01-20 08:51:27 +09:00
tuna2134@コマリン親衛隊
bf276f51e7 Merge pull request #167 from tuna2134/renovate/log-0.x-lockfile
fix(deps): update rust crate log to v0.4.25
2025-01-20 08:51:14 +09:00
renovate[bot]
5ea2dcff0f fix(deps): update rust crate log to v0.4.25 2025-01-14 12:51:47 +00:00
renovate[bot]
a9ea47dc51 fix(deps): update rust crate pyo3 to v0.23.4 2025-01-11 22:40:29 +00:00
renovate[bot]
69015bdf81 fix(deps): update rust crate tokio to v1.43.0 2025-01-08 18:20:47 +00:00
renovate[bot]
c6e5b73128 fix(deps): update rust crate axum to 0.8.0 2025-01-08 15:24:14 +00:00
78 changed files with 2723 additions and 1234 deletions

.env.sample

@@ -3,4 +3,5 @@ MODEL_PATH=models/tsukuyomi.sbv2
MODELS_PATH=models
TOKENIZER_PATH=models/tokenizer.json
ADDR=localhost:3000
RUST_LOG=warn
RUST_LOG=warn
HOLDER_MAX_LOADED_MODElS=20

.github/dependabot.yml (new file)

@@ -0,0 +1,11 @@
# To get started with Dependabot version updates, you'll need to specify which
# package ecosystems to update and where the package manifests are located.
# Please see the documentation for all configuration options:
# https://docs.github.com/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file
version: 2
updates:
- package-ecosystem: "cargo" # See documentation for possible values
directory: "/" # Location of package manifests
schedule:
interval: "weekly"

.github/workflows/build.Dockerfile (new file)

@@ -0,0 +1,4 @@
FROM ubuntu:latest
RUN apt update && apt install openssl libssl-dev curl pkg-config software-properties-common -y && add-apt-repository ppa:deadsnakes/ppa && apt update && apt install python3.7 python3.8 python3.9 python3.10 python3.11 python3.12 python3.13 python3-pip python3 -y
ENV PIP_BREAK_SYSTEM_PACKAGES=1
RUN mkdir -p /root/.cache/sbv2 && curl https://huggingface.co/neody/sbv2-api-assets/resolve/main/dic/all.bin -o /root/.cache/sbv2/all.bin -L

.github/workflows/CI.yml

@@ -1,39 +1,34 @@
# This file is autogenerated by maturin v1.7.1
# To update, run
#
# maturin generate-ci github
#
name: CI
name: Build
on:
push:
branches:
- main
- master
tags:
- '*'
pull_request:
workflow_dispatch:
permissions:
contents: read
contents: write
id-token: write
packages: write
jobs:
linux:
python-linux:
runs-on: ${{ matrix.platform.runner }}
strategy:
matrix:
platform:
- runner: ubuntu-latest
target: x86_64
- runner: ubuntu-latest
- runner: ubuntu-24.04-arm
target: aarch64
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: 3.x
- run: docker build . -f .github/workflows/build.Dockerfile --tag ci
- name: Build wheels
uses: PyO3/maturin-action@v1
with:
@@ -41,14 +36,15 @@ jobs:
args: --release --out dist --find-interpreter
sccache: 'true'
manylinux: auto
working-directory: sbv2_bindings
container: ci
working-directory: ./crates/sbv2_bindings
- name: Upload wheels
uses: actions/upload-artifact@v4
with:
name: wheels-linux-${{ matrix.platform.target }}
path: sbv2_bindings/dist
path: ./crates/sbv2_bindings/dist
windows:
python-windows:
runs-on: ${{ matrix.platform.runner }}
strategy:
matrix:
@@ -67,14 +63,14 @@ jobs:
target: ${{ matrix.platform.target }}
args: --release --out dist --find-interpreter
sccache: 'true'
working-directory: sbv2_bindings
working-directory: ./crates/sbv2_bindings
- name: Upload wheels
uses: actions/upload-artifact@v4
with:
name: wheels-windows-${{ matrix.platform.target }}
path: sbv2_bindings/dist
path: ./crates/sbv2_bindings/dist
macos:
python-macos:
runs-on: ${{ matrix.platform.runner }}
strategy:
matrix:
@@ -92,14 +88,14 @@ jobs:
target: ${{ matrix.platform.target }}
args: --release --out dist --find-interpreter
sccache: 'true'
working-directory: sbv2_bindings
working-directory: ./crates/sbv2_bindings
- name: Upload wheels
uses: actions/upload-artifact@v4
with:
name: wheels-macos-${{ matrix.platform.target }}
path: sbv2_bindings/dist
path: ./crates/sbv2_bindings/dist
sdist:
python-sdist:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
@@ -108,54 +104,117 @@ jobs:
with:
command: sdist
args: --out dist
working-directory: sbv2_bindings
working-directory: ./crates/sbv2_bindings
- name: Upload sdist
uses: actions/upload-artifact@v4
with:
name: wheels-sdist
path: sbv2_bindings/dist
path: ./crates/sbv2_bindings/dist
release:
python-wheel:
name: Wheel Upload
runs-on: ubuntu-latest
needs: [python-linux, python-windows, python-macos, python-sdist]
env:
GH_TOKEN: ${{ github.token }}
steps:
- uses: actions/checkout@v4
- run: gh run download ${{ github.run_id }} -p wheels-*
- name: release
run: |
gh release create commit-${GITHUB_SHA:0:8} --prerelease wheels-*/*
python-release:
name: Release
runs-on: ubuntu-latest
if: "startsWith(github.ref, 'refs/tags/')"
needs: [linux, windows, macos, sdist]
needs: [python-linux, python-windows, python-macos, python-sdist]
environment: release
env:
GH_TOKEN: ${{ github.token }}
steps:
- uses: actions/download-artifact@v4
- uses: actions/checkout@v4
- run: gh run download ${{ github.run_id }} -p wheels-*
- name: Publish to PyPI
uses: PyO3/maturin-action@v1
with:
command: upload
args: --non-interactive --skip-existing wheels-*/*
push-docker:
runs-on: ${{ matrix.platform }}
if: "startsWith(github.ref, 'refs/tags/')"
permissions:
contents: read
packages: write
docker:
runs-on: ${{ matrix.machine.runner }}
strategy:
fail-fast: false
matrix:
machine:
- platform: amd64
runner: ubuntu-latest
- platform: arm64
runner: ubuntu-24.04-arm
tag: [cpu, cuda]
platform: [ubuntu-latest, ubuntu-24.04-arm]
steps:
- uses: actions/checkout@v4
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to GitHub Container Registry
- name: Prepare
run: |
platform=${{ matrix.machine.platform }}
echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV
- name: Docker meta
id: meta
uses: docker/metadata-action@v5
with:
images: |
ghcr.io/${{ github.repository }}
- name: Login to GHCR
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push image
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Build and push by digest
id: build
uses: docker/build-push-action@v6
with:
context: .
labels: ${{ steps.meta.outputs.labels }}
file: ./scripts/docker/${{ matrix.tag }}.Dockerfile
push: true
tags: |
ghcr.io/${{ github.repository }}:${{ matrix.tag }}
file: docker/${{ matrix.tag }}.Dockerfile
ghcr.io/${{ github.repository }}:latest-${{ matrix.tag }}-${{ matrix.machine.platform }}
docker-merge:
runs-on: ubuntu-latest
needs:
- docker
steps:
- name: Download digests
uses: actions/download-artifact@v4
with:
path: ${{ runner.temp }}/digests
pattern: digests-*
merge-multiple: true
- name: Login to GHCR
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Merge
run: |
docker buildx imagetools create -t ghcr.io/${{ github.repository }}:cuda \
ghcr.io/${{ github.repository }}:latest-cuda-amd64 \
ghcr.io/${{ github.repository }}:latest-cuda-arm64
docker buildx imagetools create -t ghcr.io/${{ github.repository }}:cpu \
ghcr.io/${{ github.repository }}:latest-cpu-amd64 \
ghcr.io/${{ github.repository }}:latest-cpu-arm64

.github/workflows/lint.yml (new file)

@@ -0,0 +1,26 @@
name: Lint
on:
pull_request:
jobs:
check:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
components:
- rustfmt
- clippy
steps:
- name: Setup
uses: actions/checkout@v4
- uses: actions-rust-lang/setup-rust-toolchain@v1
with:
components: ${{ matrix.components }}
- name: Format
if: ${{ matrix.components == 'rustfmt' }}
run: cargo fmt --all -- --check
- name: Lint
if: ${{ matrix.components == 'clippy' }}
run: cargo clippy --all-targets --all-features -- -D warnings

.gitignore

@@ -1,8 +1,10 @@
target
target/
models/
!models/.gitkeep
venv/
.env
output.wav
node_modules
dist/
*.wav
node_modules/
dist/
*.csv
*.bin

Cargo.lock (generated, 2039 changed lines; diff suppressed because it is too large)

Cargo.toml

@@ -1,15 +1,25 @@
[workspace]
resolver = "2"
members = ["sbv2_api", "sbv2_core", "sbv2_bindings", "sbv2_wasm"]
resolver = "3"
members = ["./crates/sbv2_api", "./crates/sbv2_core", "./crates/sbv2_bindings", "./crates/sbv2_wasm"]
[workspace.package]
version = "0.2.0-alpha6"
edition = "2021"
description = "Style-Bert-VITSの推論ライブラリ"
license = "MIT"
readme = "./README.md"
repository = "https://github.com/neodyland/sbv2-api"
documentation = "https://docs.rs/sbv2_core"
[workspace.dependencies]
anyhow = "1.0.86"
anyhow = "1.0.96"
dotenvy = "0.15.7"
env_logger = "0.11.5"
env_logger = "0.11.6"
ndarray = "0.16.1"
once_cell = "1.19.0"
once_cell = "1.20.3"
[profile.release]
lto = true
debug = false
strip = true
opt-level = "z"
lto = true
codegen-units = 1

LICENSE

@@ -1,6 +1,7 @@
MIT License
Copyright (c) 2024 tuna2134
Copyright (c) 2025- neodyland
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

README.md

@@ -1,11 +1,23 @@
# SBV2-API
## 注意:本バージョンはアルファ版です。
安定版を利用したい場合は[こちら](https://github.com/tuna2134/sbv2-api/tree/v0.1.x)をご覧ください
> [!CAUTION]
> 本バージョンはアルファ版です
>
> 安定版を利用したい場合は[こちら](https://github.com/neodyland/sbv2-api/tree/v0.1.x)をご覧ください。
> [!CAUTION]
> オプションの辞書はLGPLです。
>
> オプションの辞書を使用する場合、バイナリの内部の辞書部分について、LGPLが適用されます。
> [!NOTE]
> このレポジトリはメンテナンスの都合上、[tuna2134](https:://github.com/tuna2134)氏の所属する[Neodyland](https://neody.land/)へとリポジトリ所在地を移動しました。
>
> 引き続きtuna2134氏がメインメンテナとして管理しています。
## プログラミングに詳しくない方向け
[こちら](https://github.com/tuna2134/sbv2-gui?tab=readme-ov-file)を参照してください。
[こちら](https://github.com/tuna2134/sbv2-gui)を参照してください。
コマンドやpythonの知識なしで簡単に使えるバージョンです。(できることはほぼ同じ)
@@ -17,7 +29,7 @@ JP-Extra しか対応していません。(基本的に対応する予定もあ
## 変換方法
[こちら](https://github.com/tuna2134/sbv2-api/tree/main/convert)を参照してください。
[こちら](https://github.com/neodyland/sbv2-api/tree/main/scripts/convert)を参照してください。
## Todo
@@ -29,23 +41,27 @@ JP-Extra しか対応していません。(基本的に対応する予定もあ
- [x] GPU 対応(CUDA)
- [x] GPU 対応(DirectML)
- [x] GPU 対応(CoreML)
- [ ] WASM 変換(依存ライブラリの関係により現在は不可)
- [x] WASM 変換
- [x] arm64のdockerサポート
- [x] aivis形式のサポート
- [ ] MeCabを利用する
## 構造説明
- `sbv2_api` - 推論用 REST API
- `sbv2_core` - 推論コア部分
- `docker` - docker ビルドスクリプト
- `convert` - onnx, sbv2フォーマットへの変換スクリプト
- `crates/sbv2_api` - 推論用 REST API
- `crates/sbv2_core` - 推論コア部分
- `scripts/docker` - docker ビルドスクリプト
- `scripts/convert` - onnx, sbv2フォーマットへの変換スクリプト
## プログラミングある程度できる人向けREST API起動方法
### models をインストール
https://huggingface.co/googlefan/sbv2_onnx_models/tree/main
`tokenizer.json`,`debert.onnx`,`tsukuyomi.sbv2`を models フォルダに配置
https://huggingface.co/neody/sbv2-api-assets/tree/main/deberta
から`tokenizer.json`,`debert.onnx`
https://huggingface.co/neody/sbv2-api-assets/tree/main/model
から`tsukuyomi.sbv2`
を models フォルダに配置
### .env ファイルの作成
@@ -59,7 +75,7 @@ CPUの場合は
```sh
docker run -it --rm -p 3000:3000 --name sbv2 \
-v ./models:/work/models --env-file .env \
ghcr.io/tuna2134/sbv2-api:cpu
ghcr.io/neodyland/sbv2-api:cpu
```
<details>
@@ -74,7 +90,7 @@ CPUの場合は
```bash
docker run --platform linux/amd64 -it --rm -p 3000:3000 --name sbv2 \
-v ./models:/work/models --env-file .env \
ghcr.io/tuna2134/sbv2-api:cpu
ghcr.io/neodyland/sbv2-api:cpu
```
</details>
@@ -83,7 +99,7 @@ CUDAの場合は
docker run -it --rm -p 3000:3000 --name sbv2 \
-v ./models:/work/models --env-file .env \
--gpus all \
ghcr.io/tuna2134/sbv2-api:cuda
ghcr.io/neodyland/sbv2-api:cuda
```
### 起動確認
@@ -114,8 +130,10 @@ curl http://localhost:3000/models
- `ADDR` `localhost:3000`などのようにサーバー起動アドレスをコントロールできます。
- `MODELS_PATH` sbv2モデルの存在するフォルダを指定できます。
- `RUST_LOG` おなじみlog levelです。
- `HOLDER_MAX_LOADED_MODElS` RAMにロードされるモデルの最大数を指定します。
## 謝辞
- [litagin02/Style-Bert-VITS2](https://github.com/litagin02/Style-Bert-VITS2) - このコード書くにあたり、ベースとなる部分を参考にさせていただきました。
- [litagin02/Style-Bert-VITS2](https://github.com/litagin02/Style-Bert-VITS2) - このコード書くにあたり、ベースとなる部分を参考にさせていただきました。
- [Googlefan](https://github.com/Googlefan256) - 彼にモデルを ONNX ヘ変換および効率化をする方法を教わりました。
- [Aivis Project](https://github.com/Aivis-Project/AivisSpeech-Engine) - 辞書部分

content.txt (deleted)

@@ -1,14 +0,0 @@
悪徳貴族として名高いヴェレット家の長男――オウガ・ヴェレットは転生者である。
ブラック企業に勤め、過労死した彼には一つの夢があった。
「可愛いハーレム作って、美味い物を食べる。領民の税金で楽して好き放題な生活を送ってみせる!」
素晴らしき異世界ライフを夢見た彼は実現へ向けて、努力を始めた。
ハーレムを築くためにいじめられてる平民の子を助けて恩を売ってやったり。
労働力を手に入れるために多くの孤児を雇って教育したり。
反乱を起きても鎮圧できるように魔法学院へ通って魔法を極める。
「クックック……! 順調、順調! 未来は明るいなぁ!」
――オウガはまだ知らない。
楽な生活を送るためにしてきたことが評価され、世間から『聖者』様として呼ばれる未来を。

convert/requirements.txt (deleted)

@@ -1,5 +0,0 @@
style-bert-vits2
onnxsim
numpy<2
zstandard
onnxruntime

crates/sbv2_api/Cargo.toml

@@ -1,19 +1,24 @@
[package]
name = "sbv2_api"
version = "0.2.0-alpha4"
edition = "2021"
version.workspace = true
edition.workspace = true
description.workspace = true
readme.workspace = true
repository.workspace = true
documentation.workspace = true
license.workspace = true
[dependencies]
anyhow.workspace = true
axum = "0.7.5"
axum = "0.8.0"
dotenvy.workspace = true
env_logger.workspace = true
log = "0.4.22"
sbv2_core = { version = "0.2.0-alpha2", path = "../sbv2_core", features = ["aivmx"] }
sbv2_core = { version = "0.2.0-alpha6", path = "../sbv2_core", features = ["aivmx"] }
serde = { version = "1.0.210", features = ["derive"] }
tokio = { version = "1.40.0", features = ["full"] }
tokio = { version = "1.45.1", features = ["full"] }
utoipa = { version = "5.0.0", features = ["axum_extras"] }
utoipa-scalar = { version = "0.2.0", features = ["axum"] }
utoipa-scalar = { version = "0.3.0", features = ["axum"] }
[features]
coreml = ["sbv2_core/coreml"]

crates/sbv2_api/src/main.rs

@@ -53,12 +53,16 @@ struct SynthesizeRequest {
text: String,
ident: String,
#[serde(default = "sdp_default")]
#[schema(example = 0.0_f32)]
sdp_ratio: f32,
#[serde(default = "length_default")]
#[schema(example = 1.0_f32)]
length_scale: f32,
#[serde(default = "style_id_default")]
#[schema(example = 0_i32)]
style_id: i32,
#[serde(default = "speaker_id_default")]
#[schema(example = 0_i64)]
speaker_id: i64,
}

crates/sbv2_bindings/Cargo.toml (new file)

@@ -0,0 +1,24 @@
[package]
name = "sbv2_bindings"
version.workspace = true
edition.workspace = true
description.workspace = true
readme.workspace = true
repository.workspace = true
documentation.workspace = true
license.workspace = true
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[lib]
name = "sbv2_bindings"
crate-type = ["cdylib"]
[dependencies]
anyhow.workspace = true
ndarray.workspace = true
pyo3 = { version = "0.25.0", features = ["anyhow"] }
sbv2_core = { path = "../sbv2_core", features = ["std"], default-features = false }
[features]
agpl_dict = ["sbv2_core/agpl_dict"]
default = ["agpl_dict"]

crates/sbv2_bindings/src/tts.rs

@@ -107,7 +107,7 @@ impl TTSModel {
/// style_vector : StyleVector
/// スタイルベクトル
fn get_style_vector(
&self,
&mut self,
ident: String,
style_id: i32,
weight: f32,
@@ -136,6 +136,7 @@ impl TTSModel {
/// -------
/// voice_data : bytes
/// 音声データ
#[allow(clippy::too_many_arguments)]
fn synthesize<'p>(
&'p mut self,
py: Python<'p>,
@@ -145,7 +146,7 @@ impl TTSModel {
speaker_id: i64,
sdp_ratio: f32,
length_scale: f32,
) -> anyhow::Result<Bound<PyBytes>> {
) -> anyhow::Result<Bound<'p, PyBytes>> {
let data = self.model.easy_synthesize(
ident.as_str(),
&text,

crates/sbv2_core/Cargo.toml

@@ -1,12 +1,12 @@
[package]
name = "sbv2_core"
description = "Style-Bert-VITSの推論ライブラリ"
version = "0.2.0-alpha4"
edition = "2021"
license = "MIT"
readme = "../README.md"
repository = "https://github.com/tuna2134/sbv2-api"
documentation = "https://docs.rs/sbv2_core"
version.workspace = true
edition.workspace = true
description.workspace = true
readme.workspace = true
repository.workspace = true
documentation.workspace = true
license.workspace = true
[dependencies]
anyhow.workspace = true
@@ -14,29 +14,34 @@ base64 = { version = "0.22.1", optional = true }
dotenvy.workspace = true
env_logger.workspace = true
hound = "3.5.1"
jpreprocess = { version = "0.10.0", features = ["naist-jdic"] }
jpreprocess = { version = "0.12.0", features = ["naist-jdic"] }
ndarray.workspace = true
npyz = { version = "0.8.3", optional = true }
npyz = { version = "0.8.4", optional = true }
num_cpus = "1.16.0"
once_cell.workspace = true
ort = { git = "https://github.com/pykeio/ort.git", version = "2.0.0-rc.8", optional = true }
ort = { git = "https://github.com/pykeio/ort.git", version = "2.0.0-rc.9", optional = true }
regex = "1.10.6"
serde = { version = "1.0.210", features = ["derive"] }
serde_json = "1.0.128"
tar = "0.4.41"
thiserror = "1.0.63"
thiserror = "2.0.11"
tokenizers = { version = "0.21.0", default-features = false }
zstd = "0.13.2"
[features]
cuda = ["ort/cuda", "std"]
cuda_tf32 = ["std", "cuda"]
agpl_dict = []
std = ["dep:ort", "tokenizers/progressbar", "tokenizers/onig", "tokenizers/esaxx_fast"]
dynamic = ["ort/load-dynamic", "std"]
directml = ["ort/directml", "std"]
tensorrt = ["ort/tensorrt", "std"]
coreml = ["ort/coreml", "std"]
default = ["std"]
default = ["std", "agpl_dict"]
no_std = ["tokenizers/unstable_wasm"]
aivmx = ["npyz", "base64"]
base64 = ["dep:base64"]
[build-dependencies]
dirs = "6.0.0"
ureq = "3.0.6"

crates/sbv2_core/build.rs (new file)

@@ -0,0 +1,31 @@
use dirs::home_dir;
use std::env;
use std::fs;
use std::io::copy;
use std::path::PathBuf;
fn main() -> Result<(), Box<dyn std::error::Error>> {
let static_dir = home_dir().unwrap().join(".cache/sbv2");
let static_path = static_dir.join("all.bin");
let out_path = PathBuf::from(&env::var("OUT_DIR").unwrap()).join("all.bin");
println!("cargo:rerun-if-changed=build.rs");
if static_path.exists() {
println!("cargo:info=Dictionary file already exists, skipping download.");
} else {
println!("cargo:warning=Downloading dictionary file...");
let mut response =
ureq::get("https://huggingface.co/neody/sbv2-api-assets/resolve/main/dic/all.bin")
.call()?;
let mut response = response.body_mut().as_reader();
if !static_dir.exists() {
fs::create_dir_all(static_dir)?;
}
let mut file = fs::File::create(&static_path)?;
copy(&mut response, &mut file)?;
}
if !out_path.exists() && fs::hard_link(&static_path, &out_path).is_err() {
println!("cargo:warning=Failed to create hard link, copying instead.");
fs::copy(static_path, out_path)?;
}
Ok(())
}
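
Note: the dictionary fetched here ends up in `OUT_DIR`, where the `agpl_dict` feature in `jtalk.rs` (below) embeds it at compile time via `include_bytes!(concat!(env!("OUT_DIR"), "/all.bin"))`; the copy under `~/.cache/sbv2` only exists to avoid re-downloading on every clean build.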

crates/sbv2_core/src/bert.rs (new file)

@@ -0,0 +1,22 @@
use crate::error::Result;
use ndarray::{Array2, Ix2};
use ort::session::Session;
use ort::value::TensorRef;
pub fn predict(
session: &mut Session,
token_ids: Vec<i64>,
attention_masks: Vec<i64>,
) -> Result<Array2<f32>> {
let outputs = session.run(
ort::inputs! {
"input_ids" => TensorRef::from_array_view((vec![1, token_ids.len() as i64], token_ids.as_slice()))?,
"attention_mask" => TensorRef::from_array_view((vec![1, attention_masks.len() as i64], attention_masks.as_slice()))?,
}
)?;
let output = outputs["output"]
.try_extract_array::<f32>()?
.into_dimensionality::<Ix2>()?
.to_owned();
Ok(output)
}
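
For orientation, here is a minimal sketch of how this helper is driven (`parse_text_blocking` later in this diff passes a closure that calls it). The tokenizer path and the u32-to-i64 widening are my assumptions, not code from this changeset:

```rust
use tokenizers::Tokenizer;

// Hypothetical driver for `predict`; path and error mapping are illustrative only.
fn bert_features(
    session: &mut ort::session::Session,
    text: &str,
) -> anyhow::Result<ndarray::Array2<f32>> {
    let tokenizer = Tokenizer::from_file("models/tokenizer.json").map_err(anyhow::Error::msg)?;
    let enc = tokenizer.encode(text, true).map_err(anyhow::Error::msg)?;
    // The ONNX graph expects i64 tensors, so widen the u32 ids and mask.
    let ids: Vec<i64> = enc.get_ids().iter().map(|&x| x as i64).collect();
    let mask: Vec<i64> = enc.get_attention_mask().iter().map(|&x| x as i64).collect();
    Ok(predict(session, ids, mask)?)
}
```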

crates/sbv2_core/src/error.rs

@@ -6,6 +6,8 @@ pub enum Error {
TokenizerError(#[from] tokenizers::Error),
#[error("JPreprocess error: {0}")]
JPreprocessError(#[from] jpreprocess::error::JPreprocessError),
#[error("Lindera error: {0}")]
LinderaError(String),
#[cfg(feature = "std")]
#[error("ONNX error: {0}")]
OrtError(#[from] ort::Error),
@@ -26,6 +28,8 @@ pub enum Error {
Base64Error(#[from] base64::DecodeError),
#[error("other")]
OtherError(String),
#[error("Style error: {0}")]
StyleError(String),
}
pub type Result<T> = std::result::Result<T, Error>;

crates/sbv2_core/src/jtalk.rs

@@ -1,21 +1,32 @@
use crate::error::{Error, Result};
use crate::mora::{MORA_KATA_TO_MORA_PHONEMES, VOWELS};
use crate::mora::{CONSONANTS, MORA_KATA_TO_MORA_PHONEMES, MORA_PHONEMES_TO_MORA_KATA, VOWELS};
use crate::norm::{replace_punctuation, PUNCTUATIONS};
use jpreprocess::*;
use jpreprocess::{kind, DefaultTokenizer, JPreprocess, SystemDictionaryConfig, UserDictionary};
use once_cell::sync::Lazy;
use regex::Regex;
use std::cmp::Reverse;
use std::collections::HashSet;
use std::sync::Arc;
type JPreprocessType = JPreprocess<DefaultFetcher>;
type JPreprocessType = JPreprocess<DefaultTokenizer>;
#[cfg(feature = "agpl_dict")]
fn agpl_dict() -> Result<Option<UserDictionary>> {
Ok(Some(
UserDictionary::load(include_bytes!(concat!(env!("OUT_DIR"), "/all.bin")))
.map_err(|e| Error::LinderaError(e.to_string()))?,
))
}
#[cfg(not(feature = "agpl_dict"))]
fn agpl_dict() -> Result<Option<UserDictionary>> {
Ok(None)
}
fn initialize_jtalk() -> Result<JPreprocessType> {
let config = JPreprocessConfig {
dictionary: SystemDictionaryConfig::Bundled(kind::JPreprocessDictionaryKind::NaistJdic),
user_dictionary: None,
};
let jpreprocess = JPreprocess::from_config(config)?;
let sdic =
SystemDictionaryConfig::Bundled(kind::JPreprocessDictionaryKind::NaistJdic).load()?;
let jpreprocess = JPreprocess::with_dictionaries(sdic, agpl_dict()?);
Ok(jpreprocess)
}
@@ -65,6 +76,34 @@ static MORA_PATTERN: Lazy<Vec<String>> = Lazy::new(|| {
});
static LONG_PATTERN: Lazy<Regex> = Lazy::new(|| Regex::new(r"(\w)(ー*)").unwrap());
fn phone_tone_to_kana(phones: Vec<String>, tones: Vec<i32>) -> Vec<(String, i32)> {
let phones = &phones[1..];
let tones = &tones[1..];
let mut results = Vec::new();
let mut current_mora = String::new();
for ((phone, _next_phone), (&tone, &next_tone)) in phones
.iter()
.zip(phones.iter().skip(1))
.zip(tones.iter().zip(tones.iter().skip(1)))
{
if PUNCTUATIONS.contains(&phone.clone().as_str()) {
results.push((phone.to_string(), tone));
continue;
}
if CONSONANTS.contains(&phone.clone()) {
assert_eq!(current_mora, "");
assert_eq!(tone, next_tone);
current_mora = phone.to_string()
} else {
current_mora += phone;
let kana = MORA_PHONEMES_TO_MORA_KATA.get(&current_mora).unwrap();
results.push((kana.to_string(), tone));
current_mora = String::new();
}
}
results
}
pub struct JTalkProcess {
jpreprocess: Arc<JPreprocessType>,
parsed: Vec<String>,
@@ -154,6 +193,11 @@ impl JTalkProcess {
Ok((phones, tones, new_word2ph))
}
pub fn g2kana_tone(&self) -> Result<Vec<(String, i32)>> {
let (phones, tones, _) = self.g2p()?;
Ok(phone_tone_to_kana(phones, tones))
}
fn distribute_phone(n_phone: i32, n_word: i32) -> Vec<i32> {
let mut phones_per_word = vec![0; n_word as usize];
for _ in 0..n_phone {
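
A worked example of the new `phone_tone_to_kana` may help (my own; the leading and trailing `_` markers with tone 0 match what `g2p` produces). The function is private, so this only illustrates the mapping:

```rust
// "_ k o N _" with tones 0 1 1 1 0 regroups into two kana.
let phones: Vec<String> = ["_", "k", "o", "N", "_"].iter().map(|s| s.to_string()).collect();
let tones = vec![0, 1, 1, 1, 0];
assert_eq!(
    phone_tone_to_kana(phones, tones),
    vec![("コ".to_string(), 1), ("ン".to_string(), 1)] // "k"+"o" -> コ, bare "N" -> ン
);
```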

crates/sbv2_core/src/main.rs

@@ -6,7 +6,7 @@ fn main_inner() -> anyhow::Result<()> {
use sbv2_core::tts;
dotenvy::dotenv_override().ok();
env_logger::init();
let text = fs::read_to_string("content.txt")?;
let text = "今日の天気は快晴です。";
let ident = "aaa";
let mut tts_holder = tts::TTSModelHolder::new(
&fs::read(env::var("BERT_MODEL_PATH")?)?,
@@ -15,17 +15,22 @@ fn main_inner() -> anyhow::Result<()> {
.ok()
.and_then(|x| x.parse().ok()),
)?;
let mp = env::var("MODEL_PATH")?;
let b = fs::read(&mp)?;
#[cfg(not(feature = "aivmx"))]
{
tts_holder.load_sbv2file(ident, fs::read(env::var("MODEL_PATH")?)?)?;
tts_holder.load_sbv2file(ident, b)?;
}
#[cfg(feature = "aivmx")]
{
tts_holder.load_aivmx(ident, fs::read(env::var("MODEL_PATH")?)?)?;
if mp.ends_with(".sbv2") {
tts_holder.load_sbv2file(ident, b)?;
} else {
tts_holder.load_aivmx(ident, b)?;
}
}
let audio =
tts_holder.easy_synthesize(ident, &text, 0, 0, tts::SynthesizeOptions::default())?;
let audio = tts_holder.easy_synthesize(ident, text, 0, 0, tts::SynthesizeOptions::default())?;
fs::write("output.wav", audio)?;
Ok(())

crates/sbv2_core/src/model.rs (new file)

@@ -0,0 +1,106 @@
use crate::error::Result;
use ndarray::{array, Array1, Array2, Array3, Axis, Ix3};
use ort::session::{builder::GraphOptimizationLevel, Session};
#[allow(clippy::vec_init_then_push, unused_variables)]
pub fn load_model<P: AsRef<[u8]>>(model_file: P, bert: bool) -> Result<Session> {
let mut exp = Vec::new();
#[cfg(feature = "tensorrt")]
{
if bert {
exp.push(
ort::execution_providers::TensorRTExecutionProvider::default()
.with_fp16(true)
.with_profile_min_shapes("input_ids:1x1,attention_mask:1x1")
.with_profile_max_shapes("input_ids:1x100,attention_mask:1x100")
.with_profile_opt_shapes("input_ids:1x25,attention_mask:1x25")
.build(),
);
}
}
#[cfg(feature = "cuda")]
{
#[allow(unused_mut)]
let mut cuda = ort::execution_providers::CUDAExecutionProvider::default();
#[cfg(feature = "cuda_tf32")]
{
cuda = cuda.with_tf32(true);
}
exp.push(cuda.build());
}
#[cfg(feature = "directml")]
{
exp.push(ort::execution_providers::DirectMLExecutionProvider::default().build());
}
#[cfg(feature = "coreml")]
{
exp.push(ort::execution_providers::CoreMLExecutionProvider::default().build());
}
exp.push(ort::execution_providers::CPUExecutionProvider::default().build());
Ok(Session::builder()?
.with_execution_providers(exp)?
.with_optimization_level(GraphOptimizationLevel::Level3)?
.with_intra_threads(num_cpus::get_physical())?
.with_parallel_execution(true)?
.with_inter_threads(num_cpus::get_physical())?
.commit_from_memory(model_file.as_ref())?)
}
#[allow(clippy::too_many_arguments)]
pub fn synthesize(
session: &mut Session,
bert_ori: Array2<f32>,
x_tst: Array1<i64>,
mut spk_ids: Array1<i64>,
tones: Array1<i64>,
lang_ids: Array1<i64>,
style_vector: Array1<f32>,
sdp_ratio: f32,
length_scale: f32,
noise_scale: f32,
noise_scale_w: f32,
) -> Result<Array3<f32>> {
let bert_ori = bert_ori.insert_axis(Axis(0));
let bert_ori = bert_ori.as_standard_layout();
let bert = ort::value::TensorRef::from_array_view(&bert_ori)?;
let mut x_tst_lengths = array![x_tst.shape()[0] as i64];
let x_tst_lengths = ort::value::TensorRef::from_array_view(&mut x_tst_lengths)?;
let mut x_tst = x_tst.insert_axis(Axis(0));
let x_tst = ort::value::TensorRef::from_array_view(&mut x_tst)?;
let mut lang_ids = lang_ids.insert_axis(Axis(0));
let lang_ids = ort::value::TensorRef::from_array_view(&mut lang_ids)?;
let mut tones = tones.insert_axis(Axis(0));
let tones = ort::value::TensorRef::from_array_view(&mut tones)?;
let mut style_vector = style_vector.insert_axis(Axis(0));
let style_vector = ort::value::TensorRef::from_array_view(&mut style_vector)?;
let sid = ort::value::TensorRef::from_array_view(&mut spk_ids)?;
let sdp_ratio = vec![sdp_ratio];
let sdp_ratio = ort::value::TensorRef::from_array_view((vec![1_i64], sdp_ratio.as_slice()))?;
let length_scale = vec![length_scale];
let length_scale =
ort::value::TensorRef::from_array_view((vec![1_i64], length_scale.as_slice()))?;
let noise_scale = vec![noise_scale];
let noise_scale =
ort::value::TensorRef::from_array_view((vec![1_i64], noise_scale.as_slice()))?;
let noise_scale_w = vec![noise_scale_w];
let noise_scale_w =
ort::value::TensorRef::from_array_view((vec![1_i64], noise_scale_w.as_slice()))?;
let outputs = session.run(ort::inputs! {
"x_tst" => x_tst,
"x_tst_lengths" => x_tst_lengths,
"sid" => sid,
"tones" => tones,
"language" => lang_ids,
"bert" => bert,
"style_vec" => style_vector,
"sdp_ratio" => sdp_ratio,
"length_scale" => length_scale,
"noise_scale" => noise_scale,
"noise_scale_w" => noise_scale_w,
})?;
let audio_array = outputs["output"]
.try_extract_array::<f32>()?
.into_dimensionality::<Ix3>()?
.to_owned();
Ok(audio_array)
}
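
Note the registration order here: TensorRT, CUDA, DirectML, and CoreML are each pushed behind their feature flag, with the CPU provider appended last. As far as I know, ort tries providers in the order they are registered, so a build with several accelerator features enabled still falls back to CPU when no accelerator is available.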

crates/sbv2_core/src/mora.rs

@@ -25,6 +25,21 @@ static MORA_LIST_ADDITIONAL: Lazy<Vec<Mora>> = Lazy::new(|| {
data.additional
});
pub static MORA_PHONEMES_TO_MORA_KATA: Lazy<HashMap<String, String>> = Lazy::new(|| {
let mut map = HashMap::new();
for mora in MORA_LIST_MINIMUM.iter() {
map.insert(
format!(
"{}{}",
mora.consonant.clone().unwrap_or("".to_string()),
mora.vowel
),
mora.mora.clone(),
);
}
map
});
pub static MORA_KATA_TO_MORA_PHONEMES: Lazy<HashMap<String, (Option<String>, String)>> =
Lazy::new(|| {
let mut map = HashMap::new();
@@ -37,4 +52,12 @@ pub static MORA_KATA_TO_MORA_PHONEMES: Lazy<HashMap<String, (Option<String>, Str
map
});
pub static CONSONANTS: Lazy<Vec<String>> = Lazy::new(|| {
let consonants = MORA_KATA_TO_MORA_PHONEMES
.values()
.filter_map(|(consonant, _)| consonant.clone())
.collect::<Vec<_>>();
consonants
});
pub const VOWELS: [&str; 6] = ["a", "i", "u", "e", "o", "N"];

crates/sbv2_core/src/style.rs

@@ -1,4 +1,4 @@
use crate::error::Result;
use crate::error::{Error, Result};
use ndarray::{s, Array1, Array2};
use serde::Deserialize;
@@ -21,6 +21,18 @@ pub fn get_style_vector(
style_id: i32,
weight: f32,
) -> Result<Array1<f32>> {
if style_vectors.shape().len() != 2 {
return Err(Error::StyleError(
"Invalid shape for style vectors".to_string(),
));
}
if style_id < 0 || style_id >= style_vectors.shape()[0] as i32 {
return Err(Error::StyleError(format!(
"Invalid style ID: {}. Max ID: {}",
style_id,
style_vectors.shape()[0] - 1
)));
}
let mean = style_vectors.slice(s![0, ..]).to_owned();
let style_vector = style_vectors.slice(s![style_id as usize, ..]).to_owned();
let diff = (style_vector - &mean) * weight;
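
In other words, row 0 of the style matrix is treated as the mean vector. The hunk cuts off before the final step, but presumably the returned vector is `mean + weight * (style[style_id] - mean)`, i.e. the chosen style blended toward the mean by `weight`.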

crates/sbv2_core/src/tts.rs

@@ -5,7 +5,7 @@ use base64::prelude::{Engine as _, BASE64_STANDARD};
#[cfg(feature = "aivmx")]
use ndarray::ShapeBuilder;
use ndarray::{concatenate, Array1, Array2, Array3, Axis};
use ort::Session;
use ort::session::Session;
#[cfg(feature = "aivmx")]
use std::io::Cursor;
use tokenizers::Tokenizer;
@@ -41,7 +41,7 @@ pub struct TTSModelHolder {
tokenizer: Tokenizer,
bert: Session,
models: Vec<TTSModel>,
jtalk: jtalk::JTalk,
pub jtalk: jtalk::JTalk,
max_loaded_models: Option<usize>,
}
@@ -200,23 +200,41 @@ impl TTSModelHolder {
/// This function is for low-level usage, use `easy_synthesize` for high-level usage.
#[allow(clippy::type_complexity)]
pub fn parse_text(
&self,
&mut self,
text: &str,
) -> Result<(Array2<f32>, Array1<i64>, Array1<i64>, Array1<i64>)> {
crate::tts_util::parse_text_blocking(
text,
None,
&self.jtalk,
&self.tokenizer,
|token_ids, attention_masks| {
crate::bert::predict(&self.bert, token_ids, attention_masks)
crate::bert::predict(&mut self.bert, token_ids, attention_masks)
},
)
}
fn find_model<I: Into<TTSIdent>>(&self, ident: I) -> Result<&TTSModel> {
#[allow(clippy::type_complexity)]
pub fn parse_text_neo(
&mut self,
text: String,
given_tones: Option<Vec<i32>>,
) -> Result<(Array2<f32>, Array1<i64>, Array1<i64>, Array1<i64>)> {
crate::tts_util::parse_text_blocking(
&text,
given_tones,
&self.jtalk,
&self.tokenizer,
|token_ids, attention_masks| {
crate::bert::predict(&mut self.bert, token_ids, attention_masks)
},
)
}
fn find_model<I: Into<TTSIdent>>(&mut self, ident: I) -> Result<&mut TTSModel> {
let ident = ident.into();
self.models
.iter()
.iter_mut()
.find(|m| m.ident == ident)
.ok_or(Error::ModelNotFoundError(ident.to_string()))
}
@@ -262,7 +280,7 @@ impl TTSModelHolder {
/// # Note
/// This function is for low-level usage, use `easy_synthesize` for high-level usage.
pub fn get_style_vector<I: Into<TTSIdent>>(
&self,
&mut self,
ident: I,
style_id: i32,
weight: f32,
@@ -286,11 +304,6 @@ impl TTSModelHolder {
options: SynthesizeOptions,
) -> Result<Vec<u8>> {
self.find_and_load_model(ident)?;
let vits2 = &self
.find_model(ident)?
.vits2
.as_ref()
.ok_or(Error::ModelNotFoundError(ident.into().to_string()))?;
let style_vector = self.get_style_vector(ident, style_id, options.style_weight)?;
let audio_array = if options.split_sentences {
let texts: Vec<&str> = text.split('\n').collect();
@@ -300,6 +313,12 @@ impl TTSModelHolder {
continue;
}
let (bert_ori, phones, tones, lang_ids) = self.parse_text(t)?;
let vits2 = self
.find_model(ident)?
.vits2
.as_mut()
.ok_or(Error::ModelNotFoundError(ident.into().to_string()))?;
let audio = model::synthesize(
vits2,
bert_ori.to_owned(),
@@ -324,6 +343,85 @@ impl TTSModelHolder {
)?
} else {
let (bert_ori, phones, tones, lang_ids) = self.parse_text(text)?;
let vits2 = self
.find_model(ident)?
.vits2
.as_mut()
.ok_or(Error::ModelNotFoundError(ident.into().to_string()))?;
model::synthesize(
vits2,
bert_ori.to_owned(),
phones,
Array1::from_vec(vec![speaker_id]),
tones,
lang_ids,
style_vector,
options.sdp_ratio,
options.length_scale,
0.677,
0.8,
)?
};
tts_util::array_to_vec(audio_array)
}
pub fn easy_synthesize_neo<I: Into<TTSIdent> + Copy>(
&mut self,
ident: I,
text: &str,
given_tones: Option<Vec<i32>>,
style_id: i32,
speaker_id: i64,
options: SynthesizeOptions,
) -> Result<Vec<u8>> {
self.find_and_load_model(ident)?;
let style_vector = self.get_style_vector(ident, style_id, options.style_weight)?;
let audio_array = if options.split_sentences {
let texts: Vec<&str> = text.split('\n').collect();
let mut audios = vec![];
for (i, t) in texts.iter().enumerate() {
if t.is_empty() {
continue;
}
let (bert_ori, phones, tones, lang_ids) =
self.parse_text_neo(t.to_string(), given_tones.clone())?;
let vits2 = self
.find_model(ident)?
.vits2
.as_mut()
.ok_or(Error::ModelNotFoundError(ident.into().to_string()))?;
let audio = model::synthesize(
vits2,
bert_ori.to_owned(),
phones,
Array1::from_vec(vec![speaker_id]),
tones,
lang_ids,
style_vector.clone(),
options.sdp_ratio,
options.length_scale,
0.677,
0.8,
)?;
audios.push(audio.clone());
if i != texts.len() - 1 {
audios.push(Array3::zeros((1, 1, 22050)));
}
}
concatenate(
Axis(2),
&audios.iter().map(|x| x.view()).collect::<Vec<_>>(),
)?
} else {
let (bert_ori, phones, tones, lang_ids) = self.parse_text(text)?;
let vits2 = self
.find_model(ident)?
.vits2
.as_mut()
.ok_or(Error::ModelNotFoundError(ident.into().to_string()))?;
model::synthesize(
vits2,
bert_ori.to_owned(),
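
The pervasive `&self`-to-`&mut self` changes in this file appear to track the ort upgrade: the reworked `predict` and `synthesize` helpers take `session: &mut Session`, so everything that reaches a session (`find_model`, `parse_text`, `get_style_vector`) now has to thread mutability through.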

crates/sbv2_core/src/tts_util.rs

@@ -1,10 +1,22 @@
use std::io::Cursor;
use crate::error::Result;
use crate::jtalk::JTalkProcess;
use crate::mora::MORA_KATA_TO_MORA_PHONEMES;
use crate::norm::PUNCTUATIONS;
use crate::{jtalk, nlp, norm, tokenizer, utils};
use hound::{SampleFormat, WavSpec, WavWriter};
use ndarray::{concatenate, s, Array, Array1, Array2, Array3, Axis};
use tokenizers::Tokenizer;
pub fn preprocess_parse_text(text: &str, jtalk: &jtalk::JTalk) -> Result<(String, JTalkProcess)> {
let text = jtalk.num2word(text)?;
let normalized_text = norm::normalize_text(&text);
let process = jtalk.process_text(&normalized_text)?;
Ok((normalized_text, process))
}
/// Parse text and return the input for synthesize
///
/// # Note
@@ -21,13 +33,9 @@ pub async fn parse_text(
Box<dyn std::future::Future<Output = Result<ndarray::Array2<f32>>>>,
>,
) -> Result<(Array2<f32>, Array1<i64>, Array1<i64>, Array1<i64>)> {
let text = jtalk.num2word(text)?;
let normalized_text = norm::normalize_text(&text);
let process = jtalk.process_text(&normalized_text)?;
let (normalized_text, process) = preprocess_parse_text(text, jtalk)?;
let (phones, tones, mut word2ph) = process.g2p()?;
let (phones, tones, lang_ids) = nlp::cleaned_text_to_sequence(phones, tones);
let phones = utils::intersperse(&phones, 0);
let tones = utils::intersperse(&tones, 0);
let lang_ids = utils::intersperse(&lang_ids, 0);
@@ -92,6 +100,7 @@ pub async fn parse_text(
#[allow(clippy::type_complexity)]
pub fn parse_text_blocking(
text: &str,
given_tones: Option<Vec<i32>>,
jtalk: &jtalk::JTalk,
tokenizer: &Tokenizer,
bert_predict: impl FnOnce(Vec<i64>, Vec<i64>) -> Result<ndarray::Array2<f32>>,
@@ -100,7 +109,10 @@ pub fn parse_text_blocking(
let normalized_text = norm::normalize_text(&text);
let process = jtalk.process_text(&normalized_text)?;
let (phones, tones, mut word2ph) = process.g2p()?;
let (phones, mut tones, mut word2ph) = process.g2p()?;
if let Some(given_tones) = given_tones {
tones = given_tones;
}
let (phones, tones, lang_ids) = nlp::cleaned_text_to_sequence(phones, tones);
let phones = utils::intersperse(&phones, 0);
@@ -178,3 +190,23 @@ pub fn array_to_vec(audio_array: Array3<f32>) -> Result<Vec<u8>> {
writer.finalize()?;
Ok(cursor.into_inner())
}
pub fn kata_tone2phone_tone(kata_tone: Vec<(String, i32)>) -> Vec<(String, i32)> {
let mut results = vec![("_".to_string(), 0)];
for (mora, tone) in kata_tone {
if PUNCTUATIONS.contains(&mora.as_str()) {
results.push((mora, 0));
continue;
} else {
let (consonant, vowel) = MORA_KATA_TO_MORA_PHONEMES.get(&mora).unwrap();
if let Some(consonant) = consonant {
results.push((consonant.to_string(), tone));
results.push((vowel.to_string(), tone));
} else {
results.push((vowel.to_string(), tone));
}
}
}
results.push(("_".to_string(), 0));
results
}
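
A worked example of the kana-to-phoneme direction (my own, using the `コ -> ("k", "o")` and `ン -> (None, "N")` entries from the mora table in `mora.rs`):

```rust
// Each mora expands to its phonemes with the mora's tone; "_" marks silence.
let out = kata_tone2phone_tone(vec![("コ".to_string(), 1), ("ン".to_string(), 1)]);
assert_eq!(
    out,
    vec![
        ("_".to_string(), 0), // leading silence
        ("k".to_string(), 1), // consonant carries the mora's tone
        ("o".to_string(), 1), // vowel carries the mora's tone
        ("N".to_string(), 1), // ン has no consonant
        ("_".to_string(), 0), // trailing silence
    ]
);
```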

crates/sbv2_editor/Cargo.toml (new file)

@@ -0,0 +1,19 @@
[package]
name = "sbv2_editor"
version.workspace = true
edition.workspace = true
description.workspace = true
license.workspace = true
readme.workspace = true
repository.workspace = true
documentation.workspace = true
[dependencies]
anyhow.workspace = true
axum = "0.8.1"
dotenvy.workspace = true
env_logger.workspace = true
log = "0.4.27"
sbv2_core = { version = "0.2.0-alpha6", path = "../sbv2_core", features = ["aivmx"] }
serde = { version = "1.0.219", features = ["derive"] }
tokio = { version = "1.44.1", features = ["full"] }

crates/sbv2_editor/README.md (new file)

@@ -0,0 +1,2 @@
# sbv2-voicevox
sbv2-apiをvoicevox化します。


@@ -0,0 +1,226 @@
{
"accent_phrases": [
{
"moras": [
{
"text": "コ",
"consonant": "k",
"consonant_length": 0.10002632439136505,
"vowel": "o",
"vowel_length": 0.15740256011486053,
"pitch": 5.749961853027344
},
{
"text": "ン",
"consonant": null,
"consonant_length": null,
"vowel": "N",
"vowel_length": 0.08265873789787292,
"pitch": 5.89122200012207
},
{
"text": "ニ",
"consonant": "n",
"consonant_length": 0.03657080978155136,
"vowel": "i",
"vowel_length": 0.1175866425037384,
"pitch": 5.969866752624512
},
{
"text": "チ",
"consonant": "ch",
"consonant_length": 0.09005842357873917,
"vowel": "i",
"vowel_length": 0.08666137605905533,
"pitch": 5.958892822265625
},
{
"text": "ワ",
"consonant": "w",
"consonant_length": 0.07833231985569,
"vowel": "a",
"vowel_length": 0.21250136196613312,
"pitch": 5.949411392211914
}
],
"accent": 5,
"pause_mora": {
"text": "、",
"consonant": null,
"consonant_length": null,
"vowel": "pau",
"vowel_length": 0.4723339378833771,
"pitch": 0.0
},
"is_interrogative": false
},
{
"moras": [
{
"text": "オ",
"consonant": null,
"consonant_length": null,
"vowel": "o",
"vowel_length": 0.22004225850105286,
"pitch": 5.6870927810668945
},
{
"text": "ン",
"consonant": null,
"consonant_length": null,
"vowel": "N",
"vowel_length": 0.09161105751991272,
"pitch": 5.93472957611084
},
{
"text": "セ",
"consonant": "s",
"consonant_length": 0.08924821764230728,
"vowel": "e",
"vowel_length": 0.14142127335071564,
"pitch": 6.121850490570068
},
{
"text": "エ",
"consonant": null,
"consonant_length": null,
"vowel": "e",
"vowel_length": 0.10636933892965317,
"pitch": 6.157896041870117
},
{
"text": "ゴ",
"consonant": "g",
"consonant_length": 0.07600915431976318,
"vowel": "o",
"vowel_length": 0.09598273783922195,
"pitch": 6.188933849334717
},
{
"text": "オ",
"consonant": null,
"consonant_length": null,
"vowel": "o",
"vowel_length": 0.1079121008515358,
"pitch": 6.235202789306641
},
{
"text": "セ",
"consonant": "s",
"consonant_length": 0.09591838717460632,
"vowel": "e",
"vowel_length": 0.10286372154951096,
"pitch": 6.153214454650879
},
{
"text": "エ",
"consonant": null,
"consonant_length": null,
"vowel": "e",
"vowel_length": 0.08992656320333481,
"pitch": 6.02571439743042
},
{
"text": "",
"consonant": "n",
"consonant_length": 0.05660202354192734,
"vowel": "o",
"vowel_length": 0.09676017612218857,
"pitch": 5.711844444274902
}
],
"accent": 5,
"pause_mora": null,
"is_interrogative": false
},
{
"moras": [
{
"text": "セ",
"consonant": "s",
"consonant_length": 0.07805486768484116,
"vowel": "e",
"vowel_length": 0.09617523103952408,
"pitch": 5.774399280548096
},
{
"text": "カ",
"consonant": "k",
"consonant_length": 0.06712044775485992,
"vowel": "a",
"vowel_length": 0.148829385638237,
"pitch": 6.063965797424316
},
{
"text": "イ",
"consonant": null,
"consonant_length": null,
"vowel": "i",
"vowel_length": 0.11061104387044907,
"pitch": 6.040698051452637
},
{
"text": "エ",
"consonant": null,
"consonant_length": null,
"vowel": "e",
"vowel_length": 0.13046696782112122,
"pitch": 5.806027889251709
}
],
"accent": 1,
"pause_mora": null,
"is_interrogative": false
},
{
"moras": [
{
"text": "ヨ",
"consonant": "y",
"consonant_length": 0.07194744795560837,
"vowel": "o",
"vowel_length": 0.08622600883245468,
"pitch": 5.694094657897949
},
{
"text": "オ",
"consonant": null,
"consonant_length": null,
"vowel": "o",
"vowel_length": 0.10635452717542648,
"pitch": 5.787222385406494
},
{
"text": "コ",
"consonant": "k",
"consonant_length": 0.07077334076166153,
"vowel": "o",
"vowel_length": 0.09248624742031097,
"pitch": 5.793357849121094
},
{
"text": "ソ",
"consonant": "s",
"consonant_length": 0.08705667406320572,
"vowel": "o",
"vowel_length": 0.2238258570432663,
"pitch": 5.643765449523926
}
],
"accent": 1,
"pause_mora": null,
"is_interrogative": false
}
],
"speedScale": 1.0,
"pitchScale": 0.0,
"intonationScale": 1.0,
"volumeScale": 1.0,
"prePhonemeLength": 0.1,
"postPhonemeLength": 0.1,
"pauseLength": null,
"pauseLengthScale": 1.0,
"outputSamplingRate": 24000,
"outputStereo": false,
"kana": "コンニチワ'、オンセエゴ'オセエノ/セ'カイエ/ヨ'オコソ"
}
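
This sample response mirrors the VOICEVOX engine's AudioQuery shape (accent phrases made of moras with per-phoneme lengths and pitches, plus the speed/pitch/intonation/volume scales), which matches the editor crate's stated goal of putting a VOICEVOX-style interface over sbv2-api.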

crates/sbv2_editor/src/error.rs (new file)

@@ -0,0 +1,27 @@
use axum::{
http::StatusCode,
response::{IntoResponse, Response},
};
pub type AppResult<T> = std::result::Result<T, AppError>;
pub struct AppError(anyhow::Error);
impl IntoResponse for AppError {
fn into_response(self) -> Response {
(
StatusCode::INTERNAL_SERVER_ERROR,
format!("Something went wrong: {}", self.0),
)
.into_response()
}
}
impl<E> From<E> for AppError
where
E: Into<anyhow::Error>,
{
fn from(err: E) -> Self {
Self(err.into())
}
}
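
The blanket `From` impl is what lets handlers use `?` on anything anyhow can absorb. A minimal sketch (hypothetical handler, not part of this changeset):

```rust
// Any error convertible into anyhow::Error becomes a 500 response via AppError.
async fn read_greeting() -> AppResult<String> {
    let body = tokio::fs::read_to_string("greeting.txt").await?; // io::Error -> AppError
    Ok(body)
}
```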

crates/sbv2_editor/src/main.rs (new file)

@@ -0,0 +1,197 @@
use std::env;
use std::sync::Arc;

use axum::{
    extract::{Query, State},
    http::header::CONTENT_TYPE,
    response::IntoResponse,
    routing::{get, post},
    Json, Router,
};
use sbv2_core::{
    tts::{SynthesizeOptions, TTSModelHolder},
    tts_util::{kata_tone2phone_tone, preprocess_parse_text},
};
use serde::{Deserialize, Serialize};
use tokio::{fs, net::TcpListener, sync::Mutex};

mod error;

use error::AppResult;

#[derive(Deserialize)]
struct RequestCreateAudioQuery {
    text: String,
}

#[derive(Serialize, Deserialize)]
struct AudioQuery {
    kana: String,
    tone: i32,
}

#[derive(Serialize)]
struct ResponseCreateAudioQuery {
    audio_query: Vec<AudioQuery>,
    text: String,
}

async fn create_audio_query(
    State(state): State<AppState>,
    Query(request): Query<RequestCreateAudioQuery>,
) -> AppResult<impl IntoResponse> {
    let (text, process) = {
        let tts_model = state.tts_model.lock().await;
        preprocess_parse_text(&request.text, &tts_model.jtalk)?
    };
    let kana_tone_list = process.g2kana_tone()?;
    let audio_query = kana_tone_list
        .iter()
        .map(|(kana, tone)| AudioQuery {
            kana: kana.clone(),
            tone: *tone,
        })
        .collect::<Vec<_>>();
    Ok(Json(ResponseCreateAudioQuery { audio_query, text }))
}

#[derive(Deserialize)]
pub struct RequestSynthesis {
    text: String,
    speaker_id: i64,
    sdp_ratio: f32,
    length_scale: f32,
    style_id: i32,
    audio_query: Vec<AudioQuery>,
    ident: String,
}

async fn synthesis(
    State(state): State<AppState>,
    Json(request): Json<RequestSynthesis>,
) -> AppResult<impl IntoResponse> {
    let phone_tone = request
        .audio_query
        .iter()
        .map(|query| (query.kana.clone(), query.tone))
        .collect::<Vec<_>>();
    let phone_tone = kata_tone2phone_tone(phone_tone);
    let tones = phone_tone.iter().map(|(_, tone)| *tone).collect::<Vec<_>>();
    let buffer = {
        let mut tts_model = state.tts_model.lock().await;
        tts_model.easy_synthesize_neo(
            &request.ident,
            &request.text,
            Some(tones),
            request.style_id,
            request.speaker_id,
            SynthesizeOptions {
                sdp_ratio: request.sdp_ratio,
                length_scale: request.length_scale,
                ..Default::default()
            },
        )?
    };
    Ok(([(CONTENT_TYPE, "audio/wav")], buffer))
}

#[derive(Clone)]
struct AppState {
    tts_model: Arc<Mutex<TTSModelHolder>>,
}

impl AppState {
    pub async fn new() -> anyhow::Result<Self> {
        let mut tts_model = TTSModelHolder::new(
            &fs::read(env::var("BERT_MODEL_PATH")?).await?,
            &fs::read(env::var("TOKENIZER_PATH")?).await?,
            env::var("HOLDER_MAX_LOADED_MODELS")
                .ok()
                .and_then(|x| x.parse().ok()),
        )?;
        let models = env::var("MODELS_PATH").unwrap_or("models".to_string());
        let mut f = fs::read_dir(&models).await?;
        let mut entries = vec![];
        while let Ok(Some(e)) = f.next_entry().await {
            let name = e.file_name().to_string_lossy().to_string();
            if name.ends_with(".onnx") && name.starts_with("model_") {
                // Strip the "model_" prefix and ".onnx" suffix; both are ASCII,
                // so byte-offset slicing is safe here.
                entries.push(name["model_".len()..name.len() - ".onnx".len()].to_string());
            } else if name.ends_with(".sbv2") {
                let entry = &name[..name.len() - 5];
                log::info!("Try loading: {entry}");
                let sbv2_bytes = match fs::read(format!("{models}/{entry}.sbv2")).await {
                    Ok(b) => b,
                    Err(e) => {
                        log::warn!("Error loading sbv2_bytes from file {entry}: {e}");
                        continue;
                    }
                };
                if let Err(e) = tts_model.load_sbv2file(entry, sbv2_bytes) {
                    log::warn!("Error loading {entry}: {e}");
                    continue;
                }
                log::info!("Loaded: {entry}");
            } else if name.ends_with(".aivmx") {
                let entry = &name[..name.len() - 6];
                log::info!("Try loading: {entry}");
                let aivmx_bytes = match fs::read(format!("{models}/{entry}.aivmx")).await {
                    Ok(b) => b,
                    Err(e) => {
                        log::warn!("Error loading aivmx bytes from file {entry}: {e}");
                        continue;
                    }
                };
                if let Err(e) = tts_model.load_aivmx(entry, aivmx_bytes) {
                    log::error!("Error loading {entry}: {e}");
                    continue;
                }
                log::info!("Loaded: {entry}");
            }
        }
        for entry in entries {
            log::info!("Try loading: {entry}");
            let style_vectors_bytes =
                match fs::read(format!("{models}/style_vectors_{entry}.json")).await {
                    Ok(b) => b,
                    Err(e) => {
                        log::warn!("Error loading style_vectors_bytes from file {entry}: {e}");
                        continue;
                    }
                };
            let vits2_bytes = match fs::read(format!("{models}/model_{entry}.onnx")).await {
                Ok(b) => b,
                Err(e) => {
                    log::warn!("Error loading vits2_bytes from file {entry}: {e}");
                    continue;
                }
            };
            if let Err(e) = tts_model.load(&entry, style_vectors_bytes, vits2_bytes) {
                log::warn!("Error loading {entry}: {e}");
                continue;
            }
            log::info!("Loaded: {entry}");
        }
        Ok(Self {
            tts_model: Arc::new(Mutex::new(tts_model)),
        })
    }
}

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    dotenvy::dotenv_override().ok();
    env_logger::init();
    let app = Router::new()
        .route("/", get(|| async { "Hello, world!" }))
        .route("/audio_query", get(create_audio_query))
        .route("/synthesis", post(synthesis))
        .with_state(AppState::new().await?);
    let listener = TcpListener::bind("0.0.0.0:8080").await?;
    axum::serve(listener, app).await?;
    Ok(())
}
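Taken together, this server reads its model paths from the environment and listens on port 8080. A sketch of exercising it, assuming the binary is built from this crate and the models directory follows the naming scheme above:

# Assumptions: deberta.onnx, tokenizer.json and at least one *.sbv2 file live in ./models.
export BERT_MODEL_PATH=models/deberta.onnx
export TOKENIZER_PATH=models/tokenizer.json
export MODELS_PATH=models

# GET /audio_query returns {"text": ..., "audio_query": [{"kana": ..., "tone": ...}, ...]};
# POST /synthesis takes that list back and returns audio/wav bytes.
curl -G 'http://localhost:8080/audio_query' --data-urlencode 'text=こんにちは'

(test.py at the end of this diff shows the same round trip from Python.)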

View File

@@ -1,7 +1,12 @@
[package]
name = "sbv2_wasm"
version = "0.1.0"
edition = "2021"
version.workspace = true
edition.workspace = true
description.workspace = true
readme.workspace = true
repository.workspace = true
documentation.workspace = true
license.workspace = true
[lib]
crate-type = ["cdylib", "rlib"]
@@ -13,7 +18,3 @@ once_cell.workspace = true
js-sys = "0.3.70"
ndarray.workspace = true
wasm-bindgen-futures = "0.4.43"
[profile.release]
lto = true
opt-level = "s"

View File

@@ -0,0 +1,2 @@
# StyleBertVITS2 wasm
refer to https://github.com/neodyland/sbv2-api

crates/sbv2_wasm/build.sh Executable file
View File

@@ -0,0 +1,8 @@
#!/bin/sh
wasm-pack build --target web ./crates/sbv2_wasm --release
wasm-opt -O3 -o ./crates/sbv2_wasm/pkg/sbv2_wasm_bg.wasm ./crates/sbv2_wasm/pkg/sbv2_wasm_bg.wasm
wasm-strip ./crates/sbv2_wasm/pkg/sbv2_wasm_bg.wasm
mkdir -p ./crates/sbv2_wasm/dist
cp ./crates/sbv2_wasm/pkg/sbv2_wasm_bg.wasm ./crates/sbv2_wasm/dist/sbv2_wasm_bg.wasm
cd ./crates/sbv2_wasm
pnpm build
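The script assumes the whole wasm toolchain is already on PATH. A sketch of installing the tools it invokes (package names are to the best of my knowledge; wasm-opt ships with binaryen, wasm-strip with wabt):

cargo install wasm-pack        # provides `wasm-pack build`
sudo apt install binaryen wabt # provides wasm-opt and wasm-strip on Debian/Ubuntu
npm install -g pnpm            # provides `pnpm build`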

View File

@@ -3,9 +3,12 @@ import fs from "node:fs/promises";
ModelHolder.globalInit(await fs.readFile("./dist/sbv2_wasm_bg.wasm"));
const holder = await ModelHolder.create(
	(await fs.readFile("../models/tokenizer.json")).toString("utf-8"),
	await fs.readFile("../models/deberta.onnx"),
	(await fs.readFile("../../models/tokenizer.json")).toString("utf-8"),
	await fs.readFile("../../models/deberta.onnx"),
);
await holder.load(
	"tsukuyomi",
	await fs.readFile("../../models/tsukuyomi.sbv2"),
);
await holder.load("tsukuyomi", await fs.readFile("../models/iroha2.sbv2"));
await fs.writeFile("out.wav", await holder.synthesize("tsukuyomi", "おはよう"));
holder.unload("tsukuyomi");

View File

@@ -1,6 +1,6 @@
{
"name": "sbv2",
"version": "0.1.1",
"version": "0.2.0-alpha6",
"description": "Style Bert VITS2 wasm",
"main": "dist/index.js",
"types": "dist/index.d.ts",
@@ -11,15 +11,16 @@
},
"keywords": [],
"author": "tuna2134",
"contributes": ["neodyland"],
"license": "MIT",
"devDependencies": {
"@biomejs/biome": "^1.9.2",
"@types/node": "^22.7.4",
"esbuild": "^0.24.0",
"typescript": "^5.6.2"
"@biomejs/biome": "^1.9.4",
"@types/node": "^22.13.5",
"esbuild": "^0.25.0",
"typescript": "^5.7.3"
},
"dependencies": {
"onnxruntime-web": "^1.19.2"
"onnxruntime-web": "^1.20.1"
},
"files": ["dist/*", "package.json", "README.md"]
"files": ["dist/*", "package.json", "README.md", "pkg/*.ts", "pkg/*.js"]
}

View File

@@ -9,21 +9,21 @@ importers:
.:
dependencies:
onnxruntime-web:
specifier: ^1.19.2
version: 1.20.0
specifier: ^1.20.1
version: 1.20.1
devDependencies:
'@biomejs/biome':
specifier: ^1.9.2
specifier: ^1.9.4
version: 1.9.4
'@types/node':
specifier: ^22.7.4
version: 22.8.0
specifier: ^22.13.5
version: 22.13.5
esbuild:
specifier: ^0.24.0
version: 0.24.0
specifier: ^0.25.0
version: 0.25.0
typescript:
specifier: ^5.6.2
version: 5.6.3
specifier: ^5.7.3
version: 5.8.3
packages:
@@ -80,146 +80,152 @@ packages:
cpu: [x64]
os: [win32]
'@esbuild/aix-ppc64@0.24.0':
resolution: {integrity: sha512-WtKdFM7ls47zkKHFVzMz8opM7LkcsIp9amDUBIAWirg70RM71WRSjdILPsY5Uv1D42ZpUfaPILDlfactHgsRkw==}
'@esbuild/aix-ppc64@0.25.0':
resolution: {integrity: sha512-O7vun9Sf8DFjH2UtqK8Ku3LkquL9SZL8OLY1T5NZkA34+wG3OQF7cl4Ql8vdNzM6fzBbYfLaiRLIOZ+2FOCgBQ==}
engines: {node: '>=18'}
cpu: [ppc64]
os: [aix]
'@esbuild/android-arm64@0.24.0':
resolution: {integrity: sha512-Vsm497xFM7tTIPYK9bNTYJyF/lsP590Qc1WxJdlB6ljCbdZKU9SY8i7+Iin4kyhV/KV5J2rOKsBQbB77Ab7L/w==}
'@esbuild/android-arm64@0.25.0':
resolution: {integrity: sha512-grvv8WncGjDSyUBjN9yHXNt+cq0snxXbDxy5pJtzMKGmmpPxeAmAhWxXI+01lU5rwZomDgD3kJwulEnhTRUd6g==}
engines: {node: '>=18'}
cpu: [arm64]
os: [android]
'@esbuild/android-arm@0.24.0':
resolution: {integrity: sha512-arAtTPo76fJ/ICkXWetLCc9EwEHKaeya4vMrReVlEIUCAUncH7M4bhMQ+M9Vf+FFOZJdTNMXNBrWwW+OXWpSew==}
'@esbuild/android-arm@0.25.0':
resolution: {integrity: sha512-PTyWCYYiU0+1eJKmw21lWtC+d08JDZPQ5g+kFyxP0V+es6VPPSUhM6zk8iImp2jbV6GwjX4pap0JFbUQN65X1g==}
engines: {node: '>=18'}
cpu: [arm]
os: [android]
'@esbuild/android-x64@0.24.0':
resolution: {integrity: sha512-t8GrvnFkiIY7pa7mMgJd7p8p8qqYIz1NYiAoKc75Zyv73L3DZW++oYMSHPRarcotTKuSs6m3hTOa5CKHaS02TQ==}
'@esbuild/android-x64@0.25.0':
resolution: {integrity: sha512-m/ix7SfKG5buCnxasr52+LI78SQ+wgdENi9CqyCXwjVR2X4Jkz+BpC3le3AoBPYTC9NHklwngVXvbJ9/Akhrfg==}
engines: {node: '>=18'}
cpu: [x64]
os: [android]
'@esbuild/darwin-arm64@0.24.0':
resolution: {integrity: sha512-CKyDpRbK1hXwv79soeTJNHb5EiG6ct3efd/FTPdzOWdbZZfGhpbcqIpiD0+vwmpu0wTIL97ZRPZu8vUt46nBSw==}
'@esbuild/darwin-arm64@0.25.0':
resolution: {integrity: sha512-mVwdUb5SRkPayVadIOI78K7aAnPamoeFR2bT5nszFUZ9P8UpK4ratOdYbZZXYSqPKMHfS1wdHCJk1P1EZpRdvw==}
engines: {node: '>=18'}
cpu: [arm64]
os: [darwin]
'@esbuild/darwin-x64@0.24.0':
resolution: {integrity: sha512-rgtz6flkVkh58od4PwTRqxbKH9cOjaXCMZgWD905JOzjFKW+7EiUObfd/Kav+A6Gyud6WZk9w+xu6QLytdi2OA==}
'@esbuild/darwin-x64@0.25.0':
resolution: {integrity: sha512-DgDaYsPWFTS4S3nWpFcMn/33ZZwAAeAFKNHNa1QN0rI4pUjgqf0f7ONmXf6d22tqTY+H9FNdgeaAa+YIFUn2Rg==}
engines: {node: '>=18'}
cpu: [x64]
os: [darwin]
'@esbuild/freebsd-arm64@0.24.0':
resolution: {integrity: sha512-6Mtdq5nHggwfDNLAHkPlyLBpE5L6hwsuXZX8XNmHno9JuL2+bg2BX5tRkwjyfn6sKbxZTq68suOjgWqCicvPXA==}
'@esbuild/freebsd-arm64@0.25.0':
resolution: {integrity: sha512-VN4ocxy6dxefN1MepBx/iD1dH5K8qNtNe227I0mnTRjry8tj5MRk4zprLEdG8WPyAPb93/e4pSgi1SoHdgOa4w==}
engines: {node: '>=18'}
cpu: [arm64]
os: [freebsd]
'@esbuild/freebsd-x64@0.24.0':
resolution: {integrity: sha512-D3H+xh3/zphoX8ck4S2RxKR6gHlHDXXzOf6f/9dbFt/NRBDIE33+cVa49Kil4WUjxMGW0ZIYBYtaGCa2+OsQwQ==}
'@esbuild/freebsd-x64@0.25.0':
resolution: {integrity: sha512-mrSgt7lCh07FY+hDD1TxiTyIHyttn6vnjesnPoVDNmDfOmggTLXRv8Id5fNZey1gl/V2dyVK1VXXqVsQIiAk+A==}
engines: {node: '>=18'}
cpu: [x64]
os: [freebsd]
'@esbuild/linux-arm64@0.24.0':
resolution: {integrity: sha512-TDijPXTOeE3eaMkRYpcy3LarIg13dS9wWHRdwYRnzlwlA370rNdZqbcp0WTyyV/k2zSxfko52+C7jU5F9Tfj1g==}
'@esbuild/linux-arm64@0.25.0':
resolution: {integrity: sha512-9QAQjTWNDM/Vk2bgBl17yWuZxZNQIF0OUUuPZRKoDtqF2k4EtYbpyiG5/Dk7nqeK6kIJWPYldkOcBqjXjrUlmg==}
engines: {node: '>=18'}
cpu: [arm64]
os: [linux]
'@esbuild/linux-arm@0.24.0':
resolution: {integrity: sha512-gJKIi2IjRo5G6Glxb8d3DzYXlxdEj2NlkixPsqePSZMhLudqPhtZ4BUrpIuTjJYXxvF9njql+vRjB2oaC9XpBw==}
'@esbuild/linux-arm@0.25.0':
resolution: {integrity: sha512-vkB3IYj2IDo3g9xX7HqhPYxVkNQe8qTK55fraQyTzTX/fxaDtXiEnavv9geOsonh2Fd2RMB+i5cbhu2zMNWJwg==}
engines: {node: '>=18'}
cpu: [arm]
os: [linux]
'@esbuild/linux-ia32@0.24.0':
resolution: {integrity: sha512-K40ip1LAcA0byL05TbCQ4yJ4swvnbzHscRmUilrmP9Am7//0UjPreh4lpYzvThT2Quw66MhjG//20mrufm40mA==}
'@esbuild/linux-ia32@0.25.0':
resolution: {integrity: sha512-43ET5bHbphBegyeqLb7I1eYn2P/JYGNmzzdidq/w0T8E2SsYL1U6un2NFROFRg1JZLTzdCoRomg8Rvf9M6W6Gg==}
engines: {node: '>=18'}
cpu: [ia32]
os: [linux]
'@esbuild/linux-loong64@0.24.0':
resolution: {integrity: sha512-0mswrYP/9ai+CU0BzBfPMZ8RVm3RGAN/lmOMgW4aFUSOQBjA31UP8Mr6DDhWSuMwj7jaWOT0p0WoZ6jeHhrD7g==}
'@esbuild/linux-loong64@0.25.0':
resolution: {integrity: sha512-fC95c/xyNFueMhClxJmeRIj2yrSMdDfmqJnyOY4ZqsALkDrrKJfIg5NTMSzVBr5YW1jf+l7/cndBfP3MSDpoHw==}
engines: {node: '>=18'}
cpu: [loong64]
os: [linux]
'@esbuild/linux-mips64el@0.24.0':
resolution: {integrity: sha512-hIKvXm0/3w/5+RDtCJeXqMZGkI2s4oMUGj3/jM0QzhgIASWrGO5/RlzAzm5nNh/awHE0A19h/CvHQe6FaBNrRA==}
'@esbuild/linux-mips64el@0.25.0':
resolution: {integrity: sha512-nkAMFju7KDW73T1DdH7glcyIptm95a7Le8irTQNO/qtkoyypZAnjchQgooFUDQhNAy4iu08N79W4T4pMBwhPwQ==}
engines: {node: '>=18'}
cpu: [mips64el]
os: [linux]
'@esbuild/linux-ppc64@0.24.0':
resolution: {integrity: sha512-HcZh5BNq0aC52UoocJxaKORfFODWXZxtBaaZNuN3PUX3MoDsChsZqopzi5UupRhPHSEHotoiptqikjN/B77mYQ==}
'@esbuild/linux-ppc64@0.25.0':
resolution: {integrity: sha512-NhyOejdhRGS8Iwv+KKR2zTq2PpysF9XqY+Zk77vQHqNbo/PwZCzB5/h7VGuREZm1fixhs4Q/qWRSi5zmAiO4Fw==}
engines: {node: '>=18'}
cpu: [ppc64]
os: [linux]
'@esbuild/linux-riscv64@0.24.0':
resolution: {integrity: sha512-bEh7dMn/h3QxeR2KTy1DUszQjUrIHPZKyO6aN1X4BCnhfYhuQqedHaa5MxSQA/06j3GpiIlFGSsy1c7Gf9padw==}
'@esbuild/linux-riscv64@0.25.0':
resolution: {integrity: sha512-5S/rbP5OY+GHLC5qXp1y/Mx//e92L1YDqkiBbO9TQOvuFXM+iDqUNG5XopAnXoRH3FjIUDkeGcY1cgNvnXp/kA==}
engines: {node: '>=18'}
cpu: [riscv64]
os: [linux]
'@esbuild/linux-s390x@0.24.0':
resolution: {integrity: sha512-ZcQ6+qRkw1UcZGPyrCiHHkmBaj9SiCD8Oqd556HldP+QlpUIe2Wgn3ehQGVoPOvZvtHm8HPx+bH20c9pvbkX3g==}
'@esbuild/linux-s390x@0.25.0':
resolution: {integrity: sha512-XM2BFsEBz0Fw37V0zU4CXfcfuACMrppsMFKdYY2WuTS3yi8O1nFOhil/xhKTmE1nPmVyvQJjJivgDT+xh8pXJA==}
engines: {node: '>=18'}
cpu: [s390x]
os: [linux]
'@esbuild/linux-x64@0.24.0':
resolution: {integrity: sha512-vbutsFqQ+foy3wSSbmjBXXIJ6PL3scghJoM8zCL142cGaZKAdCZHyf+Bpu/MmX9zT9Q0zFBVKb36Ma5Fzfa8xA==}
'@esbuild/linux-x64@0.25.0':
resolution: {integrity: sha512-9yl91rHw/cpwMCNytUDxwj2XjFpxML0y9HAOH9pNVQDpQrBxHy01Dx+vaMu0N1CKa/RzBD2hB4u//nfc+Sd3Cw==}
engines: {node: '>=18'}
cpu: [x64]
os: [linux]
'@esbuild/netbsd-x64@0.24.0':
resolution: {integrity: sha512-hjQ0R/ulkO8fCYFsG0FZoH+pWgTTDreqpqY7UnQntnaKv95uP5iW3+dChxnx7C3trQQU40S+OgWhUVwCjVFLvg==}
'@esbuild/netbsd-arm64@0.25.0':
resolution: {integrity: sha512-RuG4PSMPFfrkH6UwCAqBzauBWTygTvb1nxWasEJooGSJ/NwRw7b2HOwyRTQIU97Hq37l3npXoZGYMy3b3xYvPw==}
engines: {node: '>=18'}
cpu: [arm64]
os: [netbsd]
'@esbuild/netbsd-x64@0.25.0':
resolution: {integrity: sha512-jl+qisSB5jk01N5f7sPCsBENCOlPiS/xptD5yxOx2oqQfyourJwIKLRA2yqWdifj3owQZCL2sn6o08dBzZGQzA==}
engines: {node: '>=18'}
cpu: [x64]
os: [netbsd]
'@esbuild/openbsd-arm64@0.24.0':
resolution: {integrity: sha512-MD9uzzkPQbYehwcN583yx3Tu5M8EIoTD+tUgKF982WYL9Pf5rKy9ltgD0eUgs8pvKnmizxjXZyLt0z6DC3rRXg==}
'@esbuild/openbsd-arm64@0.25.0':
resolution: {integrity: sha512-21sUNbq2r84YE+SJDfaQRvdgznTD8Xc0oc3p3iW/a1EVWeNj/SdUCbm5U0itZPQYRuRTW20fPMWMpcrciH2EJw==}
engines: {node: '>=18'}
cpu: [arm64]
os: [openbsd]
'@esbuild/openbsd-x64@0.24.0':
resolution: {integrity: sha512-4ir0aY1NGUhIC1hdoCzr1+5b43mw99uNwVzhIq1OY3QcEwPDO3B7WNXBzaKY5Nsf1+N11i1eOfFcq+D/gOS15Q==}
'@esbuild/openbsd-x64@0.25.0':
resolution: {integrity: sha512-2gwwriSMPcCFRlPlKx3zLQhfN/2WjJ2NSlg5TKLQOJdV0mSxIcYNTMhk3H3ulL/cak+Xj0lY1Ym9ysDV1igceg==}
engines: {node: '>=18'}
cpu: [x64]
os: [openbsd]
'@esbuild/sunos-x64@0.24.0':
resolution: {integrity: sha512-jVzdzsbM5xrotH+W5f1s+JtUy1UWgjU0Cf4wMvffTB8m6wP5/kx0KiaLHlbJO+dMgtxKV8RQ/JvtlFcdZ1zCPA==}
'@esbuild/sunos-x64@0.25.0':
resolution: {integrity: sha512-bxI7ThgLzPrPz484/S9jLlvUAHYMzy6I0XiU1ZMeAEOBcS0VePBFxh1JjTQt3Xiat5b6Oh4x7UC7IwKQKIJRIg==}
engines: {node: '>=18'}
cpu: [x64]
os: [sunos]
'@esbuild/win32-arm64@0.24.0':
resolution: {integrity: sha512-iKc8GAslzRpBytO2/aN3d2yb2z8XTVfNV0PjGlCxKo5SgWmNXx82I/Q3aG1tFfS+A2igVCY97TJ8tnYwpUWLCA==}
'@esbuild/win32-arm64@0.25.0':
resolution: {integrity: sha512-ZUAc2YK6JW89xTbXvftxdnYy3m4iHIkDtK3CLce8wg8M2L+YZhIvO1DKpxrd0Yr59AeNNkTiic9YLf6FTtXWMw==}
engines: {node: '>=18'}
cpu: [arm64]
os: [win32]
'@esbuild/win32-ia32@0.24.0':
resolution: {integrity: sha512-vQW36KZolfIudCcTnaTpmLQ24Ha1RjygBo39/aLkM2kmjkWmZGEJ5Gn9l5/7tzXA42QGIoWbICfg6KLLkIw6yw==}
'@esbuild/win32-ia32@0.25.0':
resolution: {integrity: sha512-eSNxISBu8XweVEWG31/JzjkIGbGIJN/TrRoiSVZwZ6pkC6VX4Im/WV2cz559/TXLcYbcrDN8JtKgd9DJVIo8GA==}
engines: {node: '>=18'}
cpu: [ia32]
os: [win32]
'@esbuild/win32-x64@0.24.0':
resolution: {integrity: sha512-7IAFPrjSQIJrGsK6flwg7NFmwBoSTyF3rl7If0hNUFQU4ilTsEPL6GuMuU9BfIWVVGuRnuIidkSMC+c0Otu8IA==}
'@esbuild/win32-x64@0.25.0':
resolution: {integrity: sha512-ZENoHJBxA20C2zFzh6AI4fT6RraMzjYw4xKWemRTRmRVtN9c5DcH9r/f2ihEkMjOW5eGgrwCslG/+Y/3bL+DHQ==}
engines: {node: '>=18'}
cpu: [x64]
os: [win32]
@@ -254,11 +260,11 @@ packages:
'@protobufjs/utf8@1.1.0':
resolution: {integrity: sha512-Vvn3zZrhQZkkBE8LSuW3em98c0FwgO4nxzv6OdSxPKJIEKY2bGbHn+mhGIPerzI4twdxaP8/0+06HBpwf345Lw==}
'@types/node@22.8.0':
resolution: {integrity: sha512-84rafSBHC/z1i1E3p0cJwKA+CfYDNSXX9WSZBRopjIzLET8oNt6ht2tei4C7izwDeEiLLfdeSVBv1egOH916hg==}
'@types/node@22.13.5':
resolution: {integrity: sha512-+lTU0PxZXn0Dr1NBtC7Y8cR21AJr87dLLU953CWA6pMxxv/UDc7jYAY90upcrie1nRcD6XNG5HOYEDtgW5TxAg==}
esbuild@0.24.0:
resolution: {integrity: sha512-FuLPevChGDshgSicjisSooU0cemp/sGXR841D5LHMB7mTVOmsEHcAxaH3irL53+8YDIeVNQEySh4DaYU/iuPqQ==}
esbuild@0.25.0:
resolution: {integrity: sha512-BXq5mqc8ltbaN34cDqWuYKyNhX8D/Z0J1xdtdQ8UcIIIyJyz+ZMKUt58tF3SrZ85jcfN/PZYhjR5uDQAYNVbuw==}
engines: {node: '>=18'}
hasBin: true
@@ -268,14 +274,14 @@ packages:
guid-typescript@1.0.9:
resolution: {integrity: sha512-Y8T4vYhEfwJOTbouREvG+3XDsjr8E3kIr7uf+JZ0BYloFsttiHU0WfvANVsR7TxNUJa/WpCnw/Ino/p+DeBhBQ==}
long@5.2.3:
resolution: {integrity: sha512-lcHwpNoggQTObv5apGNCTdJrO69eHOZMi4BNC+rTLER8iHAqGrUVeLh/irVIM7zTw2bOXA8T6uNPeujwOLg/2Q==}
long@5.3.1:
resolution: {integrity: sha512-ka87Jz3gcx/I7Hal94xaN2tZEOPoUOEVftkQqZx2EeQRN7LGdfLlI3FvZ+7WDplm+vK2Urx9ULrvSowtdCieng==}
onnxruntime-common@1.20.0:
resolution: {integrity: sha512-9ehS4ul5fBszIcHhfxuDgk45lO+Fqrxmrgwk1Pxb1JRvbQiCB/v9Royv95SRCWHktLMviqNjBsEd/biJhd39cg==}
onnxruntime-common@1.20.1:
resolution: {integrity: sha512-YiU0s0IzYYC+gWvqD1HzLc46Du1sXpSiwzKb63PACIJr6LfL27VsXSXQvt68EzD3V0D5Bc0vyJTjmMxp0ylQiw==}
onnxruntime-web@1.20.0:
resolution: {integrity: sha512-IoUf8dqHFJLV4DUSz+Ok+xxyN6cQk57gb20m6PZE5gag3QXuvegYMq9dG8t/QF4JjTKIwvfvnr16ouzCCB9IMA==}
onnxruntime-web@1.20.1:
resolution: {integrity: sha512-TePF6XVpLL1rWVMIl5Y9ACBQcyCNFThZON/jgElNd9Txb73CIEGlklhYR3UEr1cp5r0rbGI6nDwwrs79g7WjoA==}
platform@1.3.6:
resolution: {integrity: sha512-fnWVljUchTro6RiCFvCXBbNhJc2NijN7oIQxbwsyL0buWJPG85v81ehlHI9fXrJsMNgTofEoWIQeClKpgxFLrg==}
@@ -284,13 +290,13 @@ packages:
resolution: {integrity: sha512-mRUWCc3KUU4w1jU8sGxICXH/gNS94DvI1gxqDvBzhj1JpcsimQkYiOJfwsPUykUI5ZaspFbSgmBLER8IrQ3tqw==}
engines: {node: '>=12.0.0'}
typescript@5.6.3:
resolution: {integrity: sha512-hjcS1mhfuyi4WW8IWtjP7brDrG2cuDZukyrYrSauoXGNgx0S7zceP07adYkJycEr56BOUTNPzbInooiN3fn1qw==}
typescript@5.8.3:
resolution: {integrity: sha512-p1diW6TqL9L07nNxvRMM7hMMw4c5XOo/1ibL4aAIGmSAt9slTE1Xgw5KWuof2uTOvCg9BY7ZRi+GaF+7sfgPeQ==}
engines: {node: '>=14.17'}
hasBin: true
undici-types@6.19.8:
resolution: {integrity: sha512-ve2KP6f/JnbPBFyobGHuerC9g1FYGn/F8n1LWTwNxCEzd6IfqTwUQcNXgEtmmQ6DlRrC1hrSrBnCZPokRrDHjw==}
undici-types@6.20.0:
resolution: {integrity: sha512-Ny6QZ2Nju20vw1SRHe3d9jVu6gJ+4e3+MMpqu7pqE5HT6WsTSlce++GQmK5UXS8mzV8DSYHrQH+Xrf2jVcuKNg==}
snapshots:
@@ -329,76 +335,79 @@ snapshots:
'@biomejs/cli-win32-x64@1.9.4':
optional: true
'@esbuild/aix-ppc64@0.24.0':
'@esbuild/aix-ppc64@0.25.0':
optional: true
'@esbuild/android-arm64@0.24.0':
'@esbuild/android-arm64@0.25.0':
optional: true
'@esbuild/android-arm@0.24.0':
'@esbuild/android-arm@0.25.0':
optional: true
'@esbuild/android-x64@0.24.0':
'@esbuild/android-x64@0.25.0':
optional: true
'@esbuild/darwin-arm64@0.24.0':
'@esbuild/darwin-arm64@0.25.0':
optional: true
'@esbuild/darwin-x64@0.24.0':
'@esbuild/darwin-x64@0.25.0':
optional: true
'@esbuild/freebsd-arm64@0.24.0':
'@esbuild/freebsd-arm64@0.25.0':
optional: true
'@esbuild/freebsd-x64@0.24.0':
'@esbuild/freebsd-x64@0.25.0':
optional: true
'@esbuild/linux-arm64@0.24.0':
'@esbuild/linux-arm64@0.25.0':
optional: true
'@esbuild/linux-arm@0.24.0':
'@esbuild/linux-arm@0.25.0':
optional: true
'@esbuild/linux-ia32@0.24.0':
'@esbuild/linux-ia32@0.25.0':
optional: true
'@esbuild/linux-loong64@0.24.0':
'@esbuild/linux-loong64@0.25.0':
optional: true
'@esbuild/linux-mips64el@0.24.0':
'@esbuild/linux-mips64el@0.25.0':
optional: true
'@esbuild/linux-ppc64@0.24.0':
'@esbuild/linux-ppc64@0.25.0':
optional: true
'@esbuild/linux-riscv64@0.24.0':
'@esbuild/linux-riscv64@0.25.0':
optional: true
'@esbuild/linux-s390x@0.24.0':
'@esbuild/linux-s390x@0.25.0':
optional: true
'@esbuild/linux-x64@0.24.0':
'@esbuild/linux-x64@0.25.0':
optional: true
'@esbuild/netbsd-x64@0.24.0':
'@esbuild/netbsd-arm64@0.25.0':
optional: true
'@esbuild/openbsd-arm64@0.24.0':
'@esbuild/netbsd-x64@0.25.0':
optional: true
'@esbuild/openbsd-x64@0.24.0':
'@esbuild/openbsd-arm64@0.25.0':
optional: true
'@esbuild/sunos-x64@0.24.0':
'@esbuild/openbsd-x64@0.25.0':
optional: true
'@esbuild/win32-arm64@0.24.0':
'@esbuild/sunos-x64@0.25.0':
optional: true
'@esbuild/win32-ia32@0.24.0':
'@esbuild/win32-arm64@0.25.0':
optional: true
'@esbuild/win32-x64@0.24.0':
'@esbuild/win32-ia32@0.25.0':
optional: true
'@esbuild/win32-x64@0.25.0':
optional: true
'@protobufjs/aspromise@1.1.2': {}
@@ -424,51 +433,52 @@ snapshots:
'@protobufjs/utf8@1.1.0': {}
'@types/node@22.8.0':
'@types/node@22.13.5':
dependencies:
undici-types: 6.19.8
undici-types: 6.20.0
esbuild@0.24.0:
esbuild@0.25.0:
optionalDependencies:
'@esbuild/aix-ppc64': 0.24.0
'@esbuild/android-arm': 0.24.0
'@esbuild/android-arm64': 0.24.0
'@esbuild/android-x64': 0.24.0
'@esbuild/darwin-arm64': 0.24.0
'@esbuild/darwin-x64': 0.24.0
'@esbuild/freebsd-arm64': 0.24.0
'@esbuild/freebsd-x64': 0.24.0
'@esbuild/linux-arm': 0.24.0
'@esbuild/linux-arm64': 0.24.0
'@esbuild/linux-ia32': 0.24.0
'@esbuild/linux-loong64': 0.24.0
'@esbuild/linux-mips64el': 0.24.0
'@esbuild/linux-ppc64': 0.24.0
'@esbuild/linux-riscv64': 0.24.0
'@esbuild/linux-s390x': 0.24.0
'@esbuild/linux-x64': 0.24.0
'@esbuild/netbsd-x64': 0.24.0
'@esbuild/openbsd-arm64': 0.24.0
'@esbuild/openbsd-x64': 0.24.0
'@esbuild/sunos-x64': 0.24.0
'@esbuild/win32-arm64': 0.24.0
'@esbuild/win32-ia32': 0.24.0
'@esbuild/win32-x64': 0.24.0
'@esbuild/aix-ppc64': 0.25.0
'@esbuild/android-arm': 0.25.0
'@esbuild/android-arm64': 0.25.0
'@esbuild/android-x64': 0.25.0
'@esbuild/darwin-arm64': 0.25.0
'@esbuild/darwin-x64': 0.25.0
'@esbuild/freebsd-arm64': 0.25.0
'@esbuild/freebsd-x64': 0.25.0
'@esbuild/linux-arm': 0.25.0
'@esbuild/linux-arm64': 0.25.0
'@esbuild/linux-ia32': 0.25.0
'@esbuild/linux-loong64': 0.25.0
'@esbuild/linux-mips64el': 0.25.0
'@esbuild/linux-ppc64': 0.25.0
'@esbuild/linux-riscv64': 0.25.0
'@esbuild/linux-s390x': 0.25.0
'@esbuild/linux-x64': 0.25.0
'@esbuild/netbsd-arm64': 0.25.0
'@esbuild/netbsd-x64': 0.25.0
'@esbuild/openbsd-arm64': 0.25.0
'@esbuild/openbsd-x64': 0.25.0
'@esbuild/sunos-x64': 0.25.0
'@esbuild/win32-arm64': 0.25.0
'@esbuild/win32-ia32': 0.25.0
'@esbuild/win32-x64': 0.25.0
flatbuffers@1.12.0: {}
guid-typescript@1.0.9: {}
long@5.2.3: {}
long@5.3.1: {}
onnxruntime-common@1.20.0: {}
onnxruntime-common@1.20.1: {}
onnxruntime-web@1.20.0:
onnxruntime-web@1.20.1:
dependencies:
flatbuffers: 1.12.0
guid-typescript: 1.0.9
long: 5.2.3
onnxruntime-common: 1.20.0
long: 5.3.1
onnxruntime-common: 1.20.1
platform: 1.3.6
protobufjs: 7.4.0
@@ -486,9 +496,9 @@ snapshots:
'@protobufjs/path': 1.1.2
'@protobufjs/pool': 1.1.0
'@protobufjs/utf8': 1.1.0
'@types/node': 22.8.0
long: 5.2.3
'@types/node': 22.13.5
long: 5.3.1
typescript@5.6.3: {}
typescript@5.8.3: {}
undici-types@6.19.8: {}
undici-types@6.20.0: {}

View File

@@ -74,6 +74,8 @@ export class ModelHolder {
				style_vec: e,
				sdp_ratio: new Tensor("float32", [f]),
				length_scale: new Tensor("float32", [g]),
				noise_scale: new Tensor("float32", [0.677]),
				noise_scale_w: new Tensor("float32", [0.8]),
			})
		).output;
		return [new Uint32Array(res.dims), await res.getData(true)];

View File

@@ -1,9 +0,0 @@
FROM rust AS builder
WORKDIR /work
COPY . .
RUN cargo build -r --bin sbv2_api
FROM gcr.io/distroless/cc-debian12
WORKDIR /work
COPY --from=builder /work/target/release/sbv2_api /work/main
COPY --from=builder /work/target/release/*.so /work
CMD ["/work/main"]

View File

@@ -1,15 +0,0 @@
[package]
name = "sbv2_bindings"
version = "0.2.0-alpha4"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[lib]
name = "sbv2_bindings"
crate-type = ["cdylib"]
[dependencies]
anyhow.workspace = true
ndarray.workspace = true
pyo3 = { version = "0.23.0", features = ["anyhow"] }
sbv2_core = { version = "0.2.0-alpha2", path = "../sbv2_core" }

View File

@@ -1,23 +0,0 @@
use crate::error::Result;
use ndarray::{Array2, Ix2};
use ort::Session;

pub fn predict(
    session: &Session,
    token_ids: Vec<i64>,
    attention_masks: Vec<i64>,
) -> Result<Array2<f32>> {
    let outputs = session.run(
        ort::inputs! {
            "input_ids" => Array2::from_shape_vec((1, token_ids.len()), token_ids).unwrap(),
            "attention_mask" => Array2::from_shape_vec((1, attention_masks.len()), attention_masks).unwrap(),
        }?
    )?;
    let output = outputs["output"]
        .try_extract_tensor::<f32>()?
        .into_dimensionality::<Ix2>()?
        .to_owned();
    Ok(output)
}
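A minimal sketch of how this helper is meant to be called; the literal ids below are placeholders, since in practice they come from the tokenizer:

// Hypothetical call site for predict(); `session` would come from load_model.
let token_ids: Vec<i64> = vec![2, 1374, 977, 3]; // placeholder token ids
let attention_masks: Vec<i64> = vec![1; token_ids.len()];
let bert_feature = predict(&session, token_ids, attention_masks)?; // 2-D feature matrix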

View File

@@ -1,90 +0,0 @@
use crate::error::Result;
use ndarray::{array, Array1, Array2, Array3, Axis, Ix3};
use ort::{GraphOptimizationLevel, Session};

#[allow(clippy::vec_init_then_push, unused_variables)]
pub fn load_model<P: AsRef<[u8]>>(model_file: P, bert: bool) -> Result<Session> {
    let mut exp = Vec::new();
    #[cfg(feature = "tensorrt")]
    {
        if bert {
            exp.push(
                ort::TensorRTExecutionProvider::default()
                    .with_fp16(true)
                    .with_profile_min_shapes("input_ids:1x1,attention_mask:1x1")
                    .with_profile_max_shapes("input_ids:1x100,attention_mask:1x100")
                    .with_profile_opt_shapes("input_ids:1x25,attention_mask:1x25")
                    .build(),
            );
        }
    }
    #[cfg(feature = "cuda")]
    {
        #[allow(unused_mut)]
        let mut cuda = ort::CUDAExecutionProvider::default()
            .with_conv_algorithm_search(ort::CUDAExecutionProviderCuDNNConvAlgoSearch::Default);
        #[cfg(feature = "cuda_tf32")]
        {
            cuda = cuda.with_tf32(true);
        }
        exp.push(cuda.build());
    }
    #[cfg(feature = "directml")]
    {
        exp.push(ort::DirectMLExecutionProvider::default().build());
    }
    #[cfg(feature = "coreml")]
    {
        exp.push(ort::CoreMLExecutionProvider::default().build());
    }
    exp.push(ort::CPUExecutionProvider::default().build());
    Ok(Session::builder()?
        .with_execution_providers(exp)?
        .with_optimization_level(GraphOptimizationLevel::Level3)?
        .with_intra_threads(num_cpus::get_physical())?
        .with_parallel_execution(true)?
        .with_inter_threads(num_cpus::get_physical())?
        .commit_from_memory(model_file.as_ref())?)
}

#[allow(clippy::too_many_arguments)]
pub fn synthesize(
    session: &Session,
    bert_ori: Array2<f32>,
    x_tst: Array1<i64>,
    sid: Array1<i64>,
    tones: Array1<i64>,
    lang_ids: Array1<i64>,
    style_vector: Array1<f32>,
    sdp_ratio: f32,
    length_scale: f32,
    noise_scale: f32,
    noise_scale_w: f32,
) -> Result<Array3<f32>> {
    let bert = bert_ori.insert_axis(Axis(0));
    let x_tst_lengths: Array1<i64> = array![x_tst.shape()[0] as i64];
    let x_tst = x_tst.insert_axis(Axis(0));
    let lang_ids = lang_ids.insert_axis(Axis(0));
    let tones = tones.insert_axis(Axis(0));
    let style_vector = style_vector.insert_axis(Axis(0));
    let outputs = session.run(ort::inputs! {
        "x_tst" => x_tst,
        "x_tst_lengths" => x_tst_lengths,
        "sid" => sid,
        "tones" => tones,
        "language" => lang_ids,
        "bert" => bert,
        "style_vec" => style_vector,
        "sdp_ratio" => array![sdp_ratio],
        "length_scale" => array![length_scale],
        "noise_scale" => array![noise_scale],
        "noise_scale_w" => array![noise_scale_w]
    }?)?;
    let audio_array = outputs["output"]
        .try_extract_tensor::<f32>()?
        .into_dimensionality::<Ix3>()?
        .to_owned();
    Ok(audio_array)
}

View File

@@ -1,2 +0,0 @@
# StyleBertVITS2 wasm
refer to https://github.com/tuna2134/sbv2-api

View File

@@ -1,4 +0,0 @@
wasm-pack build --target web sbv2_wasm
wasm-opt -O3 -o sbv2_wasm/pkg/sbv2_wasm_bg.wasm sbv2_wasm/pkg/sbv2_wasm_bg.wasm
mkdir -p sbv2_wasm/dist
cp sbv2_wasm/sbv2_wasm/pkg/sbv2_wasm_bg.wasm sbv2_wasm/dist/sbv2_wasm_bg.wasm

View File

@@ -1,51 +0,0 @@
<!doctype html>
<html lang="en">
	<head>
		<meta charset="UTF-8" />
		<meta name="viewport" content="width=device-width, initial-scale=1.0" />
		<title>Style Bert VITS2 Web</title>
		<script type="importmap">
			{
				"imports": {
					"onnxruntime-web": "https://cdn.jsdelivr.net/npm/onnxruntime-web@1.19.2/dist/ort.all.min.mjs",
					"sbv2": "https://cdn.jsdelivr.net/npm/sbv2@0.1.1+esm"
				}
			}
		</script>
		<script type="module" async defer>
			import { ModelHolder } from "sbv2";
			await ModelHolder.globalInit(
				await (
					await fetch("https://esm.sh/sbv2@0.1.1/dist/sbv2_wasm_bg.wasm", { cache: "force-cache" })
				).arrayBuffer(),
			);
			const holder = await ModelHolder.create(
				await (
					await fetch("/models/tokenizer.json", { cache: "force-cache" })
				).text(),
				await (
					await fetch("/models/deberta.onnx", { cache: "force-cache" })
				).arrayBuffer(),
			);
			if (typeof window.onready == "function") {
				window.onready(holder);
			}
		</script>
		<script type="module" async defer>
			window.onready = async function (holder) {
				await holder.load(
					"amitaro",
					await (await fetch("/models/amitaro.sbv2")).arrayBuffer(),
				);
				const wave = await holder.synthesize("amitaro", "おはよう");
				console.log(wave);
			};
		</script>
	</head>
	<body>
		<div id="root"></div>
	</body>
</html>

scripts/.gitignore vendored Normal file
View File

@@ -0,0 +1,5 @@
*.json
venv/
tmp/
*.safetensors
*.npy

View File

@@ -0,0 +1 @@
3.11

View File

@@ -5,6 +5,7 @@ from transformers import AutoModelForMaskedLM, AutoTokenizer
import torch
from torch import nn
from argparse import ArgumentParser
import os
parser = ArgumentParser()
parser.add_argument("--model", default="ku-nlp/deberta-v2-large-japanese-char-wwm")
@@ -15,7 +16,7 @@ bert_models.load_tokenizer(Languages.JP, model_name)
tokenizer = bert_models.load_tokenizer(Languages.JP)
converter = BertConverter(tokenizer)
tokenizer = converter.converted()
tokenizer.save("../models/tokenizer.json")
tokenizer.save("../../models/tokenizer.json")
class ORTDeberta(nn.Module):
@@ -42,9 +43,10 @@ inputs = AutoTokenizer.from_pretrained(model_name)(
torch.onnx.export(
    model,
    (inputs["input_ids"], inputs["token_type_ids"], inputs["attention_mask"]),
    "../models/deberta.onnx",
    "../../models/deberta.onnx",
    input_names=["input_ids", "token_type_ids", "attention_mask"],
    output_names=["output"],
    verbose=True,
    dynamic_axes={"input_ids": {1: "batch_size"}, "attention_mask": {1: "batch_size"}},
)
os.system("onnxsim ../../models/deberta.onnx ../../models/deberta.onnx")

View File

@@ -36,7 +36,7 @@ data = array.tolist()
hyper_parameters = HyperParameters.load_from_json(config_file)
out_name = hyper_parameters.model_name
with open(f"../models/style_vectors_{out_name}.json", "w") as f:
with open(f"../../models/style_vectors_{out_name}.json", "w") as f:
json.dump(
{
"data": data,
@@ -127,7 +127,7 @@ torch.onnx.export(
torch.tensor(0.6777),
torch.tensor(0.8),
),
f"../models/model_{out_name}.onnx",
f"../../models/model_{out_name}.onnx",
verbose=True,
dynamic_axes={
"x_tst": {0: "batch_size", 1: "x_tst_max_length"},
@@ -153,11 +153,11 @@ torch.onnx.export(
],
output_names=["output"],
)
os.system(f"onnxsim ../models/model_{out_name}.onnx ../models/model_{out_name}.onnx")
onnxfile = open(f"../models/model_{out_name}.onnx", "rb").read()
stylefile = open(f"../models/style_vectors_{out_name}.json", "rb").read()
os.system(f"onnxsim ../../models/model_{out_name}.onnx ../../models/model_{out_name}.onnx")
onnxfile = open(f"../../models/model_{out_name}.onnx", "rb").read()
stylefile = open(f"../../models/style_vectors_{out_name}.json", "rb").read()
version = bytes("1", "utf8")
with taropen(f"../models/tmp_{out_name}.sbv2tar", "w") as w:
with taropen(f"../../models/tmp_{out_name}.sbv2tar", "w") as w:
def add_tar(f, b):
t = TarInfo(f)
@@ -167,9 +167,9 @@ with taropen(f"../models/tmp_{out_name}.sbv2tar", "w") as w:
add_tar("version.txt", version)
add_tar("model.onnx", onnxfile)
add_tar("style_vectors.json", stylefile)
open(f"../models/{out_name}.sbv2", "wb").write(
open(f"../../models/{out_name}.sbv2", "wb").write(
ZstdCompressor(threads=-1, level=22).compress(
open(f"../models/tmp_{out_name}.sbv2tar", "rb").read()
open(f"../../models/tmp_{out_name}.sbv2tar", "rb").read()
)
)
os.unlink(f"../models/tmp_{out_name}.sbv2tar")
os.unlink(f"../../models/tmp_{out_name}.sbv2tar")

View File

@@ -0,0 +1,6 @@
git+https://github.com/neodyland/style-bert-vits2-ref
onnxsim
numpy<2
zstandard
onnxruntime
cmake<4

View File

@@ -0,0 +1,17 @@
FROM rust AS builder
WORKDIR /work
COPY . .
RUN cargo build -r --bin sbv2_api
FROM ubuntu AS upx
WORKDIR /work
RUN apt update && apt-get install -y upx binutils
COPY --from=builder /work/target/release/sbv2_api /work/main
COPY --from=builder /work/target/release/*.so /work
RUN upx --best --lzma /work/main
RUN find /work -maxdepth 1 -name "*.so" -exec strip --strip-unneeded {} +
RUN find /work -maxdepth 1 -name "*.so" -exec upx --best --lzma {} +
FROM gcr.io/distroless/cc-debian12
WORKDIR /work
COPY --from=upx /work/main /work/main
COPY --from=upx /work/*.so /work
CMD ["/work/main"]

View File

@@ -2,9 +2,16 @@ FROM rust AS builder
WORKDIR /work
COPY . .
RUN cargo build -r --bin sbv2_api -F cuda,cuda_tf32
FROM nvidia/cuda:12.3.2-cudnn9-runtime-ubuntu22.04
FROM ubuntu AS upx
WORKDIR /work
RUN apt update && apt-get install -y upx binutils
COPY --from=builder /work/target/release/sbv2_api /work/main
COPY --from=builder /work/target/release/*.so /work
RUN upx --best --lzma /work/main
RUN find /work -maxdepth 1 -name "*.so" -exec strip --strip-unneeded {} +
FROM nvidia/cuda:12.3.2-cudnn9-runtime-ubuntu22.04
WORKDIR /work
COPY --from=upx /work/main /work/main
COPY --from=upx /work/*.so /work
ENV LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/work
CMD ["/work/main"]
CMD ["/work/main"]

View File

@@ -1,3 +1,3 @@
docker run -it --rm -p 3000:3000 --name sbv2 \
-v ./models:/work/models --env-file .env \
ghcr.io/tuna2134/sbv2-api:cpu
ghcr.io/neodyland/sbv2-api:cpu

View File

@@ -1,4 +1,4 @@
docker run -it --rm -p 3000:3000 --name sbv2 \
-v ./models:/work/models --env-file .env \
--gpus all \
ghcr.io/tuna2134/sbv2-api:cuda
ghcr.io/neodyland/sbv2-api:cuda

scripts/make_dict.sh Executable file
View File

@@ -0,0 +1,14 @@
#!/bin/bash
set -e
git clone https://github.com/Aivis-Project/AivisSpeech-Engine ./scripts/tmp --filter=blob:none -n
cd ./scripts/tmp
git checkout 168b2a1144afe300b0490d9a6dd773ec6e927667 -- resources/dictionaries/*.csv
cd ../..
rm -rf ./crates/sbv2_core/src/dic
cp -r ./scripts/tmp/resources/dictionaries ./crates/sbv2_core/src/dic
rm -rf ./scripts/tmp
for file in ./crates/sbv2_core/src/dic/0*.csv; do
    /usr/bin/cat "$file"
    echo
done > ./crates/sbv2_core/src/all.csv
lindera build ./crates/sbv2_core/src/all.csv ./crates/sbv2_core/src/dic/all.dic -u -k ipadic
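The final step assumes the lindera CLI is on PATH; a sketch of installing it (crate name is to the best of my knowledge):

cargo install lindera-cli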

View File

@@ -16,7 +16,7 @@
"outputs": [],
"source": [
"# 必要なパッケージのインストール\n",
"!pip install sbv2_bindings\n",
"%pip install sbv2_bindings\n",
"\n",
"# 必要なモジュールのインポート\n",
"import os\n",

scripts/sbv2-test-api.py Normal file
View File

@@ -0,0 +1,8 @@
import requests

res = requests.post(
    "http://localhost:3000/synthesize",
    json={"text": "おはようございます", "ident": "tsukuyomi"},
)
with open("output.wav", "wb") as f:
    f.write(res.content)

View File

@@ -3,16 +3,16 @@ from sbv2_bindings import TTSModel
def main():
    print("Loading models...")
    model = TTSModel.from_path("../models/debert.onnx", "../models/tokenizer.json")
    model = TTSModel.from_path("./models/debert.onnx", "./models/tokenizer.json")
    print("Models loaded!")
    model.load_sbv2file_from_path("amitaro", "../models/amitaro.sbv2")
    model.load_sbv2file_from_path("amitaro", "./models/amitaro.sbv2")
    print("All setup is done!")
    style_vector = model.get_style_vector("amitaro", 0, 1.0)
    with open("output.wav", "wb") as f:
        f.write(
            model.synthesize("おはようございます。", "amitaro", style_vector, 0.0, 0.5)
            model.synthesize("おはようございます。", "amitaro", 0, 0, 0.0, 0.5)
        )

test.py
View File

@@ -1,8 +1,19 @@
import requests

res = requests.post(
    "http://localhost:3000/synthesize",
    json={"text": "おはようございます", "ident": "tsukuyomi"},
)
with open("output.wav", "wb") as f:
    f.write(res.content)
data = (requests.get("http://localhost:8080/audio_query", params={
    "text": "こんにちは、今日はいい天気ですね。",
})).json()
print(data)
data = (requests.post("http://localhost:8080/synthesis", json={
    "text": data["text"],
    "ident": "tsukuyomi",
    "speaker_id": 0,
    "style_id": 0,
    "sdp_ratio": 0.5,
    "length_scale": 0.5,
    "audio_query": data["audio_query"],
})).content
with open("test.wav", "wb") as f:
    f.write(data)