feat: dockerize app and unify query management UI
6 .dockerignore Normal file
@@ -0,0 +1,6 @@
.git
.DS_Store
__pycache__
*.pyc
*.pyo
*.swp
4 .env.example Normal file
@@ -0,0 +1,4 @@
MYSQL_ROOT_PASSWORD=mobilemodels_root_change_me
MYSQL_DATABASE=mobilemodels
MYSQL_READER_USER=mobilemodels_reader
MYSQL_READER_PASSWORD=mobilemodels_reader_change_me
49 .github/workflows/daily-sync-upstream.yml vendored Normal file
@@ -0,0 +1,49 @@
name: Daily Upstream Sync

on:
  workflow_dispatch:
  schedule:
    - cron: "0 1 * * *"

permissions:
  contents: write

jobs:
  sync:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.11"

      - name: Capture upstream commit
        id: upstream
        run: |
          echo "commit=$(git ls-remote https://github.com/KHwang9883/MobileModels.git refs/heads/master | awk '{print $1}')" >> "$GITHUB_OUTPUT"

      - name: Sync upstream raw data
        run: |
          python3 tools/sync_upstream_mobilemodels.py --build-index --export-mysql-seed

      - name: Commit changes
        env:
          UPSTREAM_COMMIT: ${{ steps.upstream.outputs.commit }}
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "41898282+github-actions[bot]@users.noreply.github.com"
          git add brands misc README.md README_en.md CHANGELOG.md CHANGELOG_en.md LICENSE.txt dist/device_index.json dist/mobilemodels_mysql_seed.sql
          if git diff --cached --quiet; then
            echo "No upstream changes to commit."
            exit 0
          fi
          git commit -m "chore: sync upstream raw data ${UPSTREAM_COMMIT::7}"

      - name: Push changes
        run: git push
CHANGELOG.md
@@ -1,4 +1,6 @@
# Changelog
### 2026-03-17
- `oppo_cn` Added OPPO Find N6.
### 2026-03-16
- `xiaomi-wear` Added Xiaomi Watch S5.
### 2026-03-12
CHANGELOG_en.md
@@ -1,4 +1,6 @@
# CHANGELOG
### 2026-03-17
- `oppo_global_en` Add OPPO Find N6.
### 2026-03-12
- `xiaomi_en` Add POCO C85x 5G.
### 2026-03-08
16 Dockerfile Normal file
@@ -0,0 +1,16 @@
FROM python:3.12-slim

ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1

WORKDIR /app

RUN apt-get update \
    && apt-get install -y --no-install-recommends git ca-certificates default-mysql-client \
    && rm -rf /var/lib/apt/lists/*

COPY . /app

EXPOSE 8123

CMD ["sh", "tools/container_start.sh"]
25 README.md
@@ -10,6 +10,31 @@
 
 [English](README_en.md)
 
+## Web UI
+
+The project ships with built-in device query and data management pages, all started via `docker compose`:
+
+```bash
+docker compose up --build -d
+```
+
+Open:
+
+- `http://127.0.0.1:8123/web/device_query.html`
+- `http://127.0.0.1:8123/web/brand_management.html`
+
+MySQL is started along with the stack:
+
+- host: `127.0.0.1`
+- port: `3306`
+- database: `mobilemodels`
+- reader user: `mobilemodels_reader`
+
+To customize the MySQL credentials, copy `.env.example` to `.env` before starting.
+Raw data, the index, and the MySQL seed are persisted in a Docker volume at runtime and are no longer written back to the local workspace.
+
+More details: [web/README.md](web/README.md).
+
 - ✅ Included
 - ⏹ Partially included
 - ❌ Not included
25 README_en.md
@@ -10,6 +10,31 @@ Collecting device names, models and internal codenames.
 
 [Issue submission](https://github.com/KHwang9883/MobileModels/issues) and [Pull Requests](https://github.com/KHwang9883/MobileModels/pulls) are welcomed if you find mistakes.
 
+## Web UI
+
+The project ships with device query and data management pages and now runs through `docker compose`:
+
+```bash
+docker compose up --build -d
+```
+
+Open:
+
+- `http://127.0.0.1:8123/web/device_query.html`
+- `http://127.0.0.1:8123/web/brand_management.html`
+
+MySQL is started together with the stack:
+
+- host: `127.0.0.1`
+- port: `3306`
+- database: `mobilemodels`
+- reader user: `mobilemodels_reader`
+
+If you want custom MySQL credentials, copy `.env.example` to `.env` before startup.
+Raw source data, rebuilt indexes, and MySQL seed files are persisted in Docker volumes instead of being written back to the local workspace at runtime.
+
+More details: [web/README.md](web/README.md)
+
 Brands that are not listed usually do not include international models.
 
 | Name | Brand | Range |
@@ -130,6 +130,12 @@
 
 `PKH120`: OPPO Find N5 卫星通信版
 
+**OPPO Find N6:**
+
+`PLP110`: OPPO Find N6
+
+`PLP120`: OPPO Find N6 卫星通信版
+
 ## Reno 系列
 
 **OPPO Reno:**
@@ -93,6 +93,10 @@
 
 `CPH2671`: OPPO Find N5
 
+**OPPO Find N6:**
+
+`CPH2765`: OPPO Find N6
+
 ## Reno series
 
 **OPPO Reno:**
@@ -337,9 +341,9 @@
 
 `CPH2811`: OPPO Reno15 Pro 5G / OPPO Reno15 Pro Max 5G
 
-**OPPO Reno15 F / OPPO Reno15 FS / OPPO Reno15 C / OPPO Reno15 A:**
+**OPPO Reno15 F / OPPO Reno15 FS / OPPO Reno15c / OPPO Reno15 A:**
 
-`CPH2801`: OPPO Reno15 F 5G / OPPO Reno15 FS 5G / OPPO Reno15 C 5G / OPPO Reno15 A
+`CPH2801`: OPPO Reno15 F 5G / OPPO Reno15 FS 5G / OPPO Reno15c 5G / OPPO Reno15 A
 
 ## F series
5636 dist/device_index.json vendored
File diff suppressed because it is too large. Load Diff
33113 dist/mobilemodels_mysql_seed.sql vendored Normal file
File diff suppressed because it is too large. Load Diff
52 docker-compose.yml Normal file
@@ -0,0 +1,52 @@
services:
  mysql:
    image: mysql:8.4
    container_name: mobilemodels-mysql
    command:
      - --character-set-server=utf8mb4
      - --collation-server=utf8mb4_0900_ai_ci
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD:-mobilemodels_root}
      MYSQL_DATABASE: ${MYSQL_DATABASE:-mobilemodels}
    ports:
      - "3306:3306"
    volumes:
      - mobilemodels_mysql_data:/var/lib/mysql
    healthcheck:
      test: ["CMD-SHELL", "mysqladmin ping -h127.0.0.1 -uroot -p$$MYSQL_ROOT_PASSWORD --silent"]
      interval: 5s
      timeout: 5s
      retries: 30
      start_period: 20s
    restart: unless-stopped
    init: true

  mobilemodels:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: mobilemodels-web
    working_dir: /app
    environment:
      MOBILEMODELS_DATA_ROOT: /data
      MYSQL_HOST: mysql
      MYSQL_PORT: 3306
      MYSQL_DATABASE: ${MYSQL_DATABASE:-mobilemodels}
      MYSQL_ROOT_USER: root
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD:-mobilemodels_root}
      MYSQL_READER_USER: ${MYSQL_READER_USER:-mobilemodels_reader}
      MYSQL_READER_PASSWORD: ${MYSQL_READER_PASSWORD:-mobilemodels_reader_change_me}
    depends_on:
      mysql:
        condition: service_healthy
    command: ["sh", "tools/container_start.sh"]
    ports:
      - "8123:8123"
    volumes:
      - mobilemodels_app_data:/data
    restart: unless-stopped
    init: true

volumes:
  mobilemodels_app_data:
  mobilemodels_mysql_data:
372 misc/mysql-query-design.md Normal file
@@ -0,0 +1,372 @@
# MySQL Query Design

## Goal

Let third parties query MySQL directly while keeping queries simple, stable, and fast enough.

Current data size:

- `device_record`: 6219
- `lookup key`: 23463

At this scale there is no need for sharding, and no need for Elasticsearch. MySQL 8 plus suitable indexes is enough.

## Overall Design

Split MySQL into two layers of responsibility:

1. Offline build layer
   - `brands/*.md` remains the raw data source.
   - `tools/device_mapper.py` generates normalized records.
   - `tools/export_mysql_seed.py` exports the MySQL seed SQL.
2. Query serving layer
   - Third parties run read-only queries against MySQL.
   - Expose a read-only account allowed `SELECT` only.
   - Preferably attach it to a read-only instance or replica, not the primary.

## Tables

### `mm_device_record`

Compatibility view, one row per device, aggregated from `mm_device_catalog`.

Main fields:

- `record_id`
- `device_name`
- `brand`
- `manufacturer_brand`
- `parent_brand`
- `market_brand`
- `device_type`
- `source_file`
- `section`
- `source_rank`
- `source_weight`
- `aliases_json`

Used for:

- Inspecting aggregated device info at the source
- Keeping historical troubleshooting SQL working
- No longer maintained as a separate physical table

### `mm_device_catalog`

The unified device query table, one row per device alias.

Main fields:

- `model`
- `alias_norm`
- `record_id`
- `device_name`
- `brand`
- `manufacturer_brand`
- `parent_brand`
- `market_brand`
- `device_type`
- `source_rank`
- `source_weight`
- `code`
- `code_alias`
- `ver_name`

This is currently the only physical device table.

Reasons:

- Only equality lookups on `alias_norm` are needed
- It backs both `mm_device_lookup` and `models`
- It is the best fit for a high-frequency read-only workload

### `mm_device_lookup`

Compatibility view over `mm_device_catalog`.

### `mm_brand_lookup`

Brand normalization table, used to map brand values supplied by third parties onto:

- `manufacturer_brand`
- `parent_brand`
- `market_brand`

For example:

- `荣耀` -> `HONOR`
- `redmi` -> `market_brand=Redmi`
- `xiaomi` -> `manufacturer_brand=Xiaomi`

### `models` / `python_services_test.models`

A legacy-structure adapter layer is kept for the old query path:

- `mobilemodels.models` (compatibility view)
- `python_services_test.models` (compatibility view)

Field mapping:

- `model`
  - The raw device identifier the current row can be queried by
  - A device with multiple aliases expands into multiple rows
- `dtype`
  - Maps to the current `device_type`
- `brand`
  - Prefers `market_brand`
- `brand_title`
  - Uses `manufacturer_brand`
- `code`
  - The primary model code recognized for the device
- `code_alias`
  - Other model codes, joined with ` | `
- `model_name`
  - Maps to the current `device_name`
- `ver_name`
  - Human-readable aliases of the device, joined with ` | `

This layer exists to keep historical SQL and third parties that query tables directly working; it is not recommended as the primary data model for new capabilities.
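The `code` / `code_alias` / `ver_name` split above can be sketched in Python. This mirrors `LEGACY_CODE_RE` and `is_legacy_code_alias` from `tools/export_mysql_seed.py` in this commit; `split_aliases` is an illustrative helper name, not a function in the repo:

```python
import re

# Mirrors LEGACY_CODE_RE from tools/export_mysql_seed.py in this commit.
LEGACY_CODE_RE = re.compile(r"^[A-Za-z0-9][A-Za-z0-9,._/+\-]{1,63}$")


def is_legacy_code_alias(text):
    # A "code" alias looks like a model number: a short ASCII token with a digit.
    value = (text or "").strip()
    if not value or not LEGACY_CODE_RE.match(value):
        return False
    return any(ch.isdigit() for ch in value)


def split_aliases(aliases):
    # Split a device's aliases into the legacy (code, code_alias, ver_name) triple:
    # first code-like alias becomes `code`, remaining codes join into `code_alias`,
    # everything else joins into `ver_name`.
    codes = [a for a in aliases if is_legacy_code_alias(a)]
    names = [a for a in aliases if not is_legacy_code_alias(a)]
    code = codes[0] if codes else None
    code_alias = " | ".join(codes[1:]) if len(codes) > 1 else None
    ver_name = " | ".join(names) if names else None
    return code, code_alias, ver_name
```

For example, `split_aliases(["NOH-AL00", "NOH-AL10", "HUAWEI Mate 40 Pro"])` puts the two model codes into `code` and `code_alias` and the marketing name into `ver_name`.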
## Query Contract

Third parties should not query raw strings directly; always query the normalized `alias_norm`.

The normalization rule matches the rest of the project:

- Lowercase everything
- Keep only `[0-9a-z\u4e00-\u9fff]`
- Strip spaces, hyphens, underscores, and other punctuation

For example:

- `SM-G9980` -> `smg9980`
- `iPhone14,2` -> `iphone142`
- `NOH-AL00` -> `nohal00`
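The rule above can be sketched as a small helper (an illustration of the stated rule; the project's actual normalizer is `normalize_text` in `tools/device_mapper.py`):

```python
import re

# Keep only digits, lowercase ASCII letters, and CJK ideographs (U+4E00-U+9FFF).
_KEEP = re.compile(r"[0-9a-z\u4e00-\u9fff]+")


def normalize_alias(text):
    # Lowercase first, then drop every character outside the allowed set.
    return "".join(_KEEP.findall(text.lower()))
```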
## Recommended SQL

Three recommended layers:

1. New integrations
   - Query `mobilemodels.mm_device_catalog` directly
2. Existing queries
   - Keep querying `mobilemodels.mm_device_lookup`
3. Legacy structure
   - Keep querying `python_services_test.models`

### 1. Primary query: look up the main table by device identifier

```sql
SELECT
  model,
  record_id,
  alias_norm,
  device_name,
  brand,
  manufacturer_brand,
  parent_brand,
  market_brand,
  device_type,
  source_file,
  section,
  source_rank,
  source_weight,
  code,
  code_alias,
  ver_name
FROM mobilemodels.mm_device_catalog
WHERE alias_norm = ?
ORDER BY source_rank ASC, record_id ASC
LIMIT 20;
```

Notes:

- `alias_norm` is the primary lookup key
- `model` is the raw device identifier the row was matched on
- `code` / `code_alias` / `ver_name` keep legacy fields working and help with display

### 1.1 Compatibility query: keep using `mm_device_lookup`

If an integration already depends on `mm_device_lookup`, its SQL can stay unchanged:

```sql
SELECT
  alias_norm,
  record_id,
  device_name,
  brand,
  manufacturer_brand,
  parent_brand,
  market_brand,
  device_type,
  source_file,
  section,
  source_rank,
  source_weight
FROM mobilemodels.mm_device_lookup
WHERE alias_norm = ?
ORDER BY source_rank ASC, record_id ASC
LIMIT 20;
```

### 1.2 Legacy-structure query: keep using `python_services_test.models`

Integrations still on the old table structure can keep querying:

```sql
SELECT
  model,
  dtype,
  brand,
  brand_title,
  code,
  code_alias,
  model_name,
  ver_name
FROM python_services_test.models
WHERE model = ?
LIMIT 20;
```

### 2. Primary query: with a brand constraint

Resolve the brand to its normalized form first, then filter:

```sql
SELECT
  l.model,
  l.record_id,
  l.alias_norm,
  l.device_name,
  l.brand,
  l.manufacturer_brand,
  l.parent_brand,
  l.market_brand,
  l.device_type,
  l.source_file,
  l.section,
  l.source_rank,
  l.source_weight,
  l.code,
  l.code_alias,
  l.ver_name
FROM mobilemodels.mm_device_catalog AS l
LEFT JOIN mobilemodels.mm_brand_lookup AS b
  ON b.alias_norm = ?
WHERE l.alias_norm = ?
  AND (
    b.alias_norm IS NULL
    OR (b.market_brand IS NOT NULL AND l.market_brand = b.market_brand)
    OR (b.manufacturer_brand IS NOT NULL AND l.manufacturer_brand = b.manufacturer_brand)
    OR (b.parent_brand IS NOT NULL AND l.parent_brand = b.parent_brand)
  )
ORDER BY l.source_rank ASC, l.record_id ASC
LIMIT 20;
```

### 3. Compatibility query: aggregated per device record

If historical troubleshooting needs "one row per device", keep querying:

```sql
SELECT
  record_id,
  device_name,
  brand,
  manufacturer_brand,
  parent_brand,
  market_brand,
  device_type,
  source_file,
  section,
  source_rank,
  source_weight,
  aliases_json
FROM mobilemodels.mm_device_record
WHERE record_id = ?
LIMIT 1;
```

## Migration Advice

Suggested migration order:

1. New integrations use `mobilemodels.mm_device_catalog` directly
2. Old queries keep `mm_device_lookup` / `python_services_test.models` unchanged for now
3. Once consumers finish field adaptation, switch them to the main table step by step

This way:

- The data layer maintains a single physical device table
- Existing query paths do not all have to change at once
- New capabilities all build on the main table

## Performance Strategy

Only four points:

1. Allow equality lookups on `alias_norm` only
   - No fuzzy `%xxx%` queries
2. Use `mm_device_catalog` as the main query table
   - Compatibility paths go through the `mm_device_lookup` / `models` views
3. Give third parties a read-only account
   - `SELECT` only
4. Preferably serve from a read-only instance
   - So third-party traffic cannot affect sync and admin operations

At the current data size, an `alias_norm` hit is a very cheap query.

## Update Flow

Do a full refresh once a day; no incremental updates.

Reasons:

- The data set is small
- Full replacement is more stable
- Less risk of stale rows or missed deletions

Suggested flow:

1. Pull upstream raw data
2. Rebuild `device_index.json`
3. Export the MySQL seed SQL
4. Run in MySQL:
   - `DELETE FROM mm_device_catalog`
   - `DELETE FROM mm_brand_lookup`
   - Bulk insert the new data
5. Switch read traffic over once done
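Step 4 matches the shape of the generated seed file (see `tools/export_mysql_seed.py` in this commit); abridged, with the `VALUES` lists elided:

```sql
USE `mobilemodels`;

START TRANSACTION;

-- Full replace: clear both tables, then bulk-insert the fresh rows.
DELETE FROM `mm_device_catalog`;
DELETE FROM `mm_brand_lookup`;

-- INSERT INTO `mm_device_catalog` (...) VALUES (...), (...);
-- INSERT INTO `mm_brand_lookup` (...) VALUES (...), (...);

COMMIT;
```

Running the whole replace inside one transaction keeps readers from ever seeing a half-empty table.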
## Security

If third parties must connect to MySQL directly, apply at least these limits:

- A dedicated read-only account
- IP allowlist
- Grant only the `mobilemodels` schema plus the `python_services_test` schema needed for compatibility
- No DDL / DML
- Connection count limits
- Query timeout limits

The safer approach is still:

- Third parties query your own read-only gateway
- The gateway queries MySQL internally

But if a direct connection is required for now, the three tables above are enough.
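A minimal sketch of such a read-only account, assuming the default user name from `.env.example`; the host pattern, password, and connection cap are placeholders:

```sql
CREATE USER IF NOT EXISTS 'mobilemodels_reader'@'%' IDENTIFIED BY 'change_me';

-- SELECT only, scoped to the two schemas named above; no DDL/DML grants.
GRANT SELECT ON `mobilemodels`.* TO 'mobilemodels_reader'@'%';
GRANT SELECT ON `python_services_test`.* TO 'mobilemodels_reader'@'%';

-- Optional hard cap on concurrent connections for this account.
ALTER USER 'mobilemodels_reader'@'%' WITH MAX_USER_CONNECTIONS 20;
```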
## Files

- Schema: `sql/mobilemodels_mysql_schema.sql`
- Seed exporter: `tools/export_mysql_seed.py`

Generate the seed:

```bash
python3 tools/export_mysql_seed.py
```

Default output:

- `dist/mobilemodels_mysql_seed.sql`
217 sql/mobilemodels_mysql_schema.sql Normal file
@@ -0,0 +1,217 @@
CREATE DATABASE IF NOT EXISTS `mobilemodels`
  DEFAULT CHARACTER SET utf8mb4
  DEFAULT COLLATE utf8mb4_0900_ai_ci;

CREATE DATABASE IF NOT EXISTS `python_services_test`
  DEFAULT CHARACTER SET utf8mb4
  DEFAULT COLLATE utf8mb4_0900_ai_ci;

DROP VIEW IF EXISTS `python_services_test`.`models`;

USE `mobilemodels`;

SET @drop_stmt = (
  SELECT CASE `TABLE_TYPE`
    WHEN 'BASE TABLE' THEN 'DROP TABLE `mm_device_record`'
    WHEN 'VIEW' THEN 'DROP VIEW `mm_device_record`'
    ELSE 'DO 0'
  END
  FROM `information_schema`.`TABLES`
  WHERE `TABLE_SCHEMA` = 'mobilemodels' AND `TABLE_NAME` = 'mm_device_record'
  LIMIT 1
);
SET @drop_stmt = COALESCE(@drop_stmt, 'DO 0');
PREPARE stmt FROM @drop_stmt;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;

SET @drop_stmt = (
  SELECT CASE `TABLE_TYPE`
    WHEN 'BASE TABLE' THEN 'DROP TABLE `mm_device_lookup`'
    WHEN 'VIEW' THEN 'DROP VIEW `mm_device_lookup`'
    ELSE 'DO 0'
  END
  FROM `information_schema`.`TABLES`
  WHERE `TABLE_SCHEMA` = 'mobilemodels' AND `TABLE_NAME` = 'mm_device_lookup'
  LIMIT 1
);
SET @drop_stmt = COALESCE(@drop_stmt, 'DO 0');
PREPARE stmt FROM @drop_stmt;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;

SET @drop_stmt = (
  SELECT CASE `TABLE_TYPE`
    WHEN 'BASE TABLE' THEN 'DROP TABLE `models`'
    WHEN 'VIEW' THEN 'DROP VIEW `models`'
    ELSE 'DO 0'
  END
  FROM `information_schema`.`TABLES`
  WHERE `TABLE_SCHEMA` = 'mobilemodels' AND `TABLE_NAME` = 'models'
  LIMIT 1
);
SET @drop_stmt = COALESCE(@drop_stmt, 'DO 0');
PREPARE stmt FROM @drop_stmt;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;

DROP VIEW IF EXISTS `vw_mm_device_lookup`;
DROP VIEW IF EXISTS `vw_models`;

CREATE TABLE IF NOT EXISTS `mm_device_catalog` (
  `record_id` varchar(64) NOT NULL,
  `model` varchar(191) NOT NULL,
  `alias_norm` varchar(191) NOT NULL,
  `device_name` varchar(255) NOT NULL,
  `brand` varchar(64) NOT NULL,
  `manufacturer_brand` varchar(64) NOT NULL,
  `parent_brand` varchar(64) NOT NULL,
  `market_brand` varchar(64) NOT NULL,
  `device_type` enum('phone','tablet','wear','tv','other') NOT NULL,
  `code` varchar(64) DEFAULT NULL,
  `code_alias` varchar(255) DEFAULT NULL,
  `ver_name` text DEFAULT NULL,
  `source_file` varchar(255) NOT NULL,
  `section` varchar(255) NOT NULL,
  `source_rank` int NOT NULL,
  `source_weight` decimal(6,3) NOT NULL,
  `updated_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `hash_md5` char(32) GENERATED ALWAYS AS (
    md5(concat_ws(_utf8mb4'|', `model`, `device_type`, `market_brand`, `manufacturer_brand`, `code`, `code_alias`, `device_name`, `ver_name`))
  ) STORED,
  `hash_crc` int unsigned GENERATED ALWAYS AS (
    crc32(concat_ws(_utf8mb4'|', `model`, `device_type`, `market_brand`, `manufacturer_brand`, `code`, `code_alias`, `device_name`, `ver_name`))
  ) STORED,
  PRIMARY KEY (`record_id`, `model`),
  KEY `idx_mm_device_catalog_alias_norm` (`alias_norm`, `source_rank`, `record_id`),
  KEY `idx_mm_device_catalog_model` (`model`),
  KEY `idx_mm_device_catalog_market_brand` (`market_brand`),
  KEY `idx_mm_device_catalog_parent_brand` (`parent_brand`),
  KEY `idx_mm_device_catalog_manufacturer_brand` (`manufacturer_brand`),
  KEY `idx_mm_device_catalog_device_type` (`device_type`),
  KEY `idx_mm_device_catalog_code` (`code`),
  KEY `idx_mm_device_catalog_hash` (`hash_md5`, `hash_crc`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;

CREATE TABLE IF NOT EXISTS `mm_brand_lookup` (
  `alias_norm` varchar(191) NOT NULL,
  `alias_type` enum('manufacturer','parent','market') NOT NULL,
  `canonical_brand` varchar(64) NOT NULL,
  `manufacturer_brand` varchar(64) DEFAULT NULL,
  `parent_brand` varchar(64) DEFAULT NULL,
  `market_brand` varchar(64) DEFAULT NULL,
  `updated_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`alias_norm`, `alias_type`),
  KEY `idx_mm_brand_lookup_canonical_brand` (`canonical_brand`),
  KEY `idx_mm_brand_lookup_manufacturer_brand` (`manufacturer_brand`),
  KEY `idx_mm_brand_lookup_parent_brand` (`parent_brand`),
  KEY `idx_mm_brand_lookup_market_brand` (`market_brand`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;

CREATE OR REPLACE VIEW `mm_device_lookup` AS
SELECT
  c.`alias_norm`,
  c.`record_id`,
  c.`device_name`,
  c.`brand`,
  c.`manufacturer_brand`,
  c.`parent_brand`,
  c.`market_brand`,
  c.`device_type`,
  c.`source_file`,
  c.`section`,
  c.`source_rank`,
  c.`source_weight`,
  c.`updated_at`
FROM `mm_device_catalog` AS c;

CREATE OR REPLACE VIEW `mm_device_record` AS
SELECT
  c.`record_id`,
  c.`device_name`,
  c.`brand`,
  c.`manufacturer_brand`,
  c.`parent_brand`,
  c.`market_brand`,
  c.`device_type`,
  c.`source_file`,
  c.`section`,
  c.`source_rank`,
  c.`source_weight`,
  CAST(CONCAT('[', GROUP_CONCAT(JSON_QUOTE(c.`model`) ORDER BY c.`model` SEPARATOR ','), ']') AS JSON) AS `aliases_json`,
  MAX(c.`updated_at`) AS `updated_at`
FROM `mm_device_catalog` AS c
GROUP BY
  c.`record_id`,
  c.`device_name`,
  c.`brand`,
  c.`manufacturer_brand`,
  c.`parent_brand`,
  c.`market_brand`,
  c.`device_type`,
  c.`source_file`,
  c.`section`,
  c.`source_rank`,
  c.`source_weight`;

CREATE OR REPLACE VIEW `vw_mm_device_lookup` AS
SELECT
  c.`alias_norm`,
  c.`record_id`,
  c.`device_name`,
  c.`brand`,
  c.`manufacturer_brand`,
  c.`parent_brand`,
  c.`market_brand`,
  c.`device_type`,
  c.`source_file`,
  c.`section`,
  c.`source_rank`,
  c.`source_weight`,
  c.`updated_at`
FROM `mm_device_catalog` AS c;

CREATE OR REPLACE VIEW `models` AS
SELECT
  c.`model`,
  c.`device_type` AS `dtype`,
  c.`market_brand` AS `brand`,
  c.`manufacturer_brand` AS `brand_title`,
  c.`code`,
  c.`code_alias`,
  c.`device_name` AS `model_name`,
  c.`ver_name`,
  c.`updated_at` AS `update_at`,
  c.`hash_md5`,
  c.`hash_crc`
FROM `mm_device_catalog` AS c;

CREATE OR REPLACE VIEW `vw_models` AS
SELECT
  c.`model`,
  c.`device_type` AS `dtype`,
  c.`market_brand` AS `brand`,
  c.`manufacturer_brand` AS `brand_title`,
  c.`code`,
  c.`code_alias`,
  c.`device_name` AS `model_name`,
  c.`ver_name`,
  c.`updated_at` AS `update_at`,
  c.`hash_md5`,
  c.`hash_crc`
FROM `mm_device_catalog` AS c;

CREATE OR REPLACE VIEW `python_services_test`.`models` AS
SELECT
  `model`,
  `dtype`,
  `brand`,
  `brand_title`,
  `code`,
  `code_alias`,
  `model_name`,
  `ver_name`,
  `update_at`,
  `hash_md5`,
  `hash_crc`
FROM `mobilemodels`.`models`;
@@ -25,7 +25,7 @@ python3 tools/device_mapper.py find --name 'L55M5-AD' --brand Xiaomi
 - `brand`: normalized brand
 - `manufacturer_brand`: manufacturer-level brand (e.g. `Xiaomi`)
 - `market_brand`: market sub-brand (e.g. `Xiaomi` / `Redmi` / `POCO`)
-- `device_type`: `phone | tablet | tv | other`
+- `device_type`: `phone | tablet | wear | tv | other`
 - `aliases`: all known searchable aliases
 - `lookup`: normalized alias -> candidate `record.id[]`
 - `brand_aliases`: normalized brand aliases to filter by app-provided brand
@@ -59,6 +59,7 @@ Supported categories:
 
 - `phone`
 - `tablet`
+- `wear`
 - `tv`
 - `other`
12 tools/container_start.sh Normal file
@@ -0,0 +1,12 @@
#!/bin/sh
set -eu

cd /app

sh tools/init_runtime_data.sh

python3 tools/device_mapper.py build
python3 tools/export_mysql_seed.py
python3 tools/load_mysql_seed.py

exec python3 tools/web_server.py --host 0.0.0.0 --port 8123
tools/device_mapper.py
@@ -68,7 +68,7 @@ FILE_BRAND_MAP: Dict[str, str] = {
 FILE_DEFAULT_DEVICE_TYPE: Dict[str, str] = {
     "mitv_cn": "tv",
     "mitv_global_en": "tv",
-    "xiaomi-wear": "other",
+    "xiaomi-wear": "wear",
     "apple_all": "phone",
     "apple_all_en": "phone",
     "apple_cn": "phone",
@@ -159,23 +159,29 @@ TABLET_KEYWORDS = [
     "平板",
     "matepad",
 ]
-OTHER_KEYWORDS = [
+WEAR_KEYWORDS = [
+    "watch",
+    "smartwatch",
+    "手表",
+    "手环",
+    "band",
+    "wear",
+    "wearable",
+    "buds",
+    "earbuds",
+    "耳机",
+    "tws",
+    "eyewear",
+    "glasses",
+    "眼镜",
+]
+OTHER_KEYWORDS = [
     "matebook",
     "笔记本",
     "laptop",
     "notebook",
     "vision",
     "vr",
     "glass",
     "眼镜",
     "ipod",
     "airpods",
 ]
@@ -290,6 +296,8 @@ def infer_device_type(
         return "tv"
     if has_keyword(corpus, TABLET_KEYWORDS):
         return "tablet"
+    if has_keyword(corpus, WEAR_KEYWORDS):
+        return "wear"
     if has_keyword(corpus, OTHER_KEYWORDS):
         return "other"
     if has_keyword(corpus, PHONE_KEYWORDS):
281 tools/export_mysql_seed.py Normal file
@@ -0,0 +1,281 @@
#!/usr/bin/env python3
"""Export MobileModels records into MySQL-friendly seed SQL."""

from __future__ import annotations

import argparse
import re
from pathlib import Path
from typing import Iterable

from device_mapper import (
    MARKET_BRAND_ALIASES,
    MARKET_BRAND_TO_MANUFACTURER,
    build_records,
    brand_aliases,
    normalize_text,
    resolve_parent_brand,
)


REPO_ROOT = Path(__file__).resolve().parent.parent
LEGACY_CODE_RE = re.compile(r"^[A-Za-z0-9][A-Za-z0-9,._/+\\-]{1,63}$")


def is_cn_source_file(source_file: str) -> bool:
    return source_file.endswith("_cn.md")


def build_source_order(records: list[object]) -> list[str]:
    source_files = sorted({record.source_file for record in records})
    cn = [source for source in source_files if is_cn_source_file(source)]
    other = [source for source in source_files if not is_cn_source_file(source)]
    return sorted(cn) + sorted(other)


def build_source_weights(records: list[object]) -> tuple[dict[str, int], dict[str, float]]:
    order = build_source_order(records)
    total = len(order)
    rank_map: dict[str, int] = {}
    weight_map: dict[str, float] = {}

    for idx, source_file in enumerate(order):
        rank = idx + 1
        weight = (((total - idx) / total) * 6) if total > 1 else 6
        rank_map[source_file] = rank
        weight_map[source_file] = round(weight, 3)

    return rank_map, weight_map


def sql_quote(value: object | None) -> str:
    if value is None:
        return "NULL"
    if isinstance(value, bool):
        return "1" if value else "0"
    if isinstance(value, (int, float)):
        return str(value)
    text = str(value)
    text = text.replace("\\", "\\\\").replace("'", "\\'")
    return f"'{text}'"


def batched(items: list[tuple[str, ...]], batch_size: int) -> Iterable[list[tuple[str, ...]]]:
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]


def build_catalog_rows(records: list[object]) -> list[tuple[str, ...]]:
    rank_map, weight_map = build_source_weights(records)
    rows = []
    seen_keys: set[tuple[str, str]] = set()
    for record in records:
        aliases = sorted({alias.strip() for alias in record.aliases if alias.strip()})
        code_aliases = [alias for alias in aliases if is_legacy_code_alias(alias)]
        primary_code = code_aliases[0] if code_aliases else None
        other_codes = [alias for alias in code_aliases if alias != primary_code]
        code_alias = " | ".join(other_codes) if other_codes else None
        version_names = [alias for alias in aliases if not is_legacy_code_alias(alias)]
        ver_name = " | ".join(version_names) if version_names else None

        for alias in aliases:
            alias_norm = normalize_text(alias)
            if not alias_norm:
                continue
            dedupe_key = (record.id, alias_norm)
            if dedupe_key in seen_keys:
                continue
            seen_keys.add(dedupe_key)
            rows.append((
                sql_quote(record.id),
                sql_quote(alias),
                sql_quote(alias_norm),
                sql_quote(record.device_name),
                sql_quote(record.brand),
                sql_quote(record.manufacturer_brand),
                sql_quote(record.parent_brand),
                sql_quote(record.market_brand),
                sql_quote(record.device_type),
                sql_quote(primary_code),
                sql_quote(code_alias),
                sql_quote(ver_name),
                sql_quote(record.source_file),
                sql_quote(record.section),
                sql_quote(rank_map[record.source_file]),
                sql_quote(f"{weight_map[record.source_file]:.3f}"),
            ))

    rows.sort(key=lambda item: (item[2], item[14], item[0], item[1]))
    return rows


def build_brand_rows(records: list[object]) -> list[tuple[str, ...]]:
    manufacturer_brands = sorted({record.manufacturer_brand for record in records})
    parent_brands = sorted({record.parent_brand for record in records})
    rows: dict[tuple[str, str], tuple[str, ...]] = {}

    for brand in manufacturer_brands:
        parent_brand = resolve_parent_brand(brand)
        for alias in brand_aliases(brand):
            alias_norm = normalize_text(alias)
            if not alias_norm:
                continue
            rows[(alias_norm, "manufacturer")] = (
                sql_quote(alias_norm),
                sql_quote("manufacturer"),
                sql_quote(brand),
                sql_quote(brand),
                sql_quote(parent_brand),
                sql_quote(None),
            )

    for brand in parent_brands:
        for alias in brand_aliases(brand):
            alias_norm = normalize_text(alias)
            if not alias_norm:
                continue
            rows[(alias_norm, "parent")] = (
                sql_quote(alias_norm),
                sql_quote("parent"),
                sql_quote(brand),
                sql_quote(None),
                sql_quote(brand),
                sql_quote(None),
            )

    for market_brand, aliases in MARKET_BRAND_ALIASES.items():
        manufacturer_brand = MARKET_BRAND_TO_MANUFACTURER.get(market_brand, market_brand)
        parent_brand = resolve_parent_brand(manufacturer_brand)
        for alias in sorted(set([market_brand, *aliases])):
            alias_norm = normalize_text(alias)
            if not alias_norm:
                continue
            rows[(alias_norm, "market")] = (
                sql_quote(alias_norm),
                sql_quote("market"),
                sql_quote(market_brand),
                sql_quote(manufacturer_brand),
                sql_quote(parent_brand),
                sql_quote(market_brand),
            )

    return [rows[key] for key in sorted(rows)]


def is_legacy_code_alias(text: str) -> bool:
    value = (text or "").strip()
    if not value or not LEGACY_CODE_RE.match(value):
        return False
    return any(ch.isdigit() for ch in value)


def append_insert_block(lines: list[str], table_name: str, columns: list[str], rows: list[tuple[str, ...]], batch_size: int = 500) -> None:
    if not rows:
        return

    column_sql = ", ".join(f"`{column}`" for column in columns)
    for chunk in batched(rows, batch_size):
        values_sql = ",\n".join(f"  ({', '.join(row)})" for row in chunk)
        lines.append(f"INSERT INTO `{table_name}` ({column_sql}) VALUES\n{values_sql};")
    lines.append("")


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="Export MobileModels MySQL seed SQL.")
    parser.add_argument(
        "--repo-root",
        type=Path,
        default=REPO_ROOT,
        help="Path to MobileModels repository root",
    )
    parser.add_argument(
        "--output",
        type=Path,
        default=Path("dist/mobilemodels_mysql_seed.sql"),
        help="Output SQL path",
    )
    return parser.parse_args()


def main() -> int:
    args = parse_args()
    repo_root = args.repo_root.resolve()
    output_path = args.output if args.output.is_absolute() else repo_root / args.output

    records = build_records(repo_root)
    device_record_count = len(records)
    catalog_rows = build_catalog_rows(records)
    brand_rows = build_brand_rows(records)

    lines = [
        "-- MobileModels MySQL seed",
        "-- Generated by tools/export_mysql_seed.py",
        "USE `mobilemodels`;",
        "",
        "START TRANSACTION;",
        "",
        "DELETE FROM `mm_device_catalog`;",
        "DELETE FROM `mm_brand_lookup`;",
        "",
    ]
    append_insert_block(
        lines,
        "mm_device_catalog",
        [
            "record_id",
            "model",
            "alias_norm",
            "device_name",
            "brand",
            "manufacturer_brand",
|
||||
"parent_brand",
|
||||
"market_brand",
|
||||
"device_type",
|
||||
"code",
|
||||
"code_alias",
|
||||
"ver_name",
|
||||
"source_file",
|
||||
"section",
|
||||
"source_rank",
|
||||
"source_weight",
|
||||
],
|
||||
catalog_rows,
|
||||
)
|
||||
append_insert_block(
|
||||
lines,
|
||||
"mm_brand_lookup",
|
||||
[
|
||||
"alias_norm",
|
||||
"alias_type",
|
||||
"canonical_brand",
|
||||
"manufacturer_brand",
|
||||
"parent_brand",
|
||||
"market_brand",
|
||||
],
|
||||
brand_rows,
|
||||
)
|
||||
|
||||
lines.extend([
|
||||
"COMMIT;",
|
||||
"",
|
||||
f"-- device_records: {device_record_count}",
|
||||
f"-- device_catalog_rows: {len(catalog_rows)}",
|
||||
f"-- device_lookup_rows: {len(catalog_rows)}",
|
||||
f"-- brand_lookup_rows: {len(brand_rows)}",
|
||||
f"-- legacy_models_rows: {len(catalog_rows)}",
|
||||
"",
|
||||
])
|
||||
|
||||
output_path.parent.mkdir(parents=True, exist_ok=True)
|
||||
output_path.write_text("\n".join(lines), encoding="utf-8")
|
||||
print(f"Exported MySQL seed: {output_path}")
|
||||
print(f"device_records={device_record_count}")
|
||||
print(f"device_catalog_rows={len(catalog_rows)}")
|
||||
print(f"device_lookup_rows={len(catalog_rows)}")
|
||||
print(f"brand_lookup_rows={len(brand_rows)}")
|
||||
print(f"legacy_models_rows={len(catalog_rows)}")
|
||||
return 0
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
raise SystemExit(main())
|
||||
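`append_insert_block` above emits one multi-row INSERT per chunk. The `batched` helper it calls is not shown in this diff; as a sketch, a minimal stand-in with the semantics of `itertools.batched` (Python 3.12+) would be:

```python
from itertools import islice


def batched(rows, size):
    # Stand-in for the `batched` helper used by append_insert_block:
    # yield consecutive chunks of at most `size` rows, as tuples.
    it = iter(rows)
    while chunk := tuple(islice(it, size)):
        yield chunk


print(list(batched(range(7), 3)))  # [(0, 1, 2), (3, 4, 5), (6,)]
```

With `batch_size=500`, a catalog of 1200 rows therefore becomes three INSERT statements of 500, 500, and 200 value tuples.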
73
tools/init_runtime_data.sh
Normal file
@@ -0,0 +1,73 @@
#!/bin/sh
set -eu

APP_ROOT="${APP_ROOT:-/app}"
DATA_ROOT="${MOBILEMODELS_DATA_ROOT:-/data}"

mkdir -p "$DATA_ROOT" "$DATA_ROOT/state"

sync_missing_dir_entries() {
    src_dir="$1"
    dst_dir="$2"

    mkdir -p "$dst_dir"

    for src_entry in "$src_dir"/*; do
        [ -e "$src_entry" ] || continue
        name="$(basename "$src_entry")"
        dst_entry="$dst_dir/$name"

        if [ -d "$src_entry" ]; then
            sync_missing_dir_entries "$src_entry" "$dst_entry"
            continue
        fi

        if [ ! -e "$dst_entry" ] && [ ! -L "$dst_entry" ]; then
            mkdir -p "$(dirname "$dst_entry")"
            cp -a "$src_entry" "$dst_entry"
        fi
    done
}

init_path() {
    rel_path="$1"
    src_path="$APP_ROOT/$rel_path"
    dst_path="$DATA_ROOT/$rel_path"

    if [ -d "$src_path" ]; then
        if [ ! -e "$dst_path" ] && [ ! -L "$dst_path" ]; then
            mkdir -p "$(dirname "$dst_path")"
            cp -a "$src_path" "$dst_path"
        else
            sync_missing_dir_entries "$src_path" "$dst_path"
        fi
    elif [ ! -e "$dst_path" ] && [ ! -L "$dst_path" ]; then
        mkdir -p "$(dirname "$dst_path")"
        cp -a "$src_path" "$dst_path"
    fi

    if [ -L "$src_path" ]; then
        current_target="$(readlink "$src_path" || true)"
        if [ "$current_target" = "$dst_path" ]; then
            return
        fi
        rm -f "$src_path"
    else
        rm -rf "$src_path"
    fi

    ln -s "$dst_path" "$src_path"
}

for rel_path in \
    brands \
    misc \
    dist \
    README.md \
    README_en.md \
    CHANGELOG.md \
    CHANGELOG_en.md \
    LICENSE.txt
do
    init_path "$rel_path"
done
165
tools/load_mysql_seed.py
Normal file
@@ -0,0 +1,165 @@
#!/usr/bin/env python3
"""Load MobileModels schema and seed data into MySQL."""

from __future__ import annotations

import argparse
import os
import subprocess
import time
from pathlib import Path


REPO_ROOT = Path(__file__).resolve().parent.parent


def mysql_env(password: str) -> dict[str, str]:
    env = os.environ.copy()
    env["MYSQL_PWD"] = password
    return env


def mysql_command(user: str, host: str, port: int, database: str | None = None) -> list[str]:
    command = [
        "mysql",
        f"--host={host}",
        f"--port={port}",
        f"--user={user}",
        "--protocol=TCP",
        "--default-character-set=utf8mb4",
    ]
    if database:
        command.append(database)
    return command


def mysqladmin_ping(user: str, password: str, host: str, port: int) -> bool:
    proc = subprocess.run(
        [
            "mysqladmin",
            f"--host={host}",
            f"--port={port}",
            f"--user={user}",
            "--protocol=TCP",
            "ping",
            "--silent",
        ],
        env=mysql_env(password),
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        text=True,
        check=False,
    )
    return proc.returncode == 0


def wait_for_mysql(user: str, password: str, host: str, port: int, timeout: int) -> None:
    deadline = time.time() + timeout
    while time.time() < deadline:
        if mysqladmin_ping(user, password, host, port):
            return
        time.sleep(2)
    raise RuntimeError(f"MySQL 未在 {timeout}s 内就绪: {host}:{port}")


def run_sql_file(user: str, password: str, host: str, port: int, path: Path, database: str | None = None) -> None:
    sql_text = path.read_text(encoding="utf-8")
    proc = subprocess.run(
        mysql_command(user, host, port, database=database),
        env=mysql_env(password),
        input=sql_text,
        text=True,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        check=False,
    )
    if proc.returncode != 0:
        message = proc.stderr.strip() or proc.stdout.strip() or f"mysql exited with {proc.returncode}"
        raise RuntimeError(f"执行 SQL 文件失败 {path}: {message}")


def sql_string(value: str) -> str:
    return value.replace("\\", "\\\\").replace("'", "''")


def ensure_reader_user(
    user: str,
    password: str,
    host: str,
    port: int,
    database: str,
    reader_user: str,
    reader_password: str,
) -> None:
    sql = f"""
CREATE USER IF NOT EXISTS '{sql_string(reader_user)}'@'%' IDENTIFIED BY '{sql_string(reader_password)}';
ALTER USER '{sql_string(reader_user)}'@'%' IDENTIFIED BY '{sql_string(reader_password)}';
GRANT SELECT ON `{database}`.* TO '{sql_string(reader_user)}'@'%';
GRANT SELECT ON `python_services_test`.* TO '{sql_string(reader_user)}'@'%';
FLUSH PRIVILEGES;
"""
    proc = subprocess.run(
        mysql_command(user, host, port),
        env=mysql_env(password),
        input=sql,
        text=True,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        check=False,
    )
    if proc.returncode != 0:
        message = proc.stderr.strip() or proc.stdout.strip() or f"mysql exited with {proc.returncode}"
        raise RuntimeError(f"创建只读账号失败: {message}")


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="Load MobileModels schema and seed data into MySQL.")
    parser.add_argument("--schema", type=Path, default=Path("sql/mobilemodels_mysql_schema.sql"))
    parser.add_argument("--seed", type=Path, default=Path("dist/mobilemodels_mysql_seed.sql"))
    parser.add_argument("--host", default=os.environ.get("MYSQL_HOST", "mysql"))
    parser.add_argument("--port", type=int, default=int(os.environ.get("MYSQL_PORT", "3306")))
    parser.add_argument("--user", default=os.environ.get("MYSQL_ROOT_USER", "root"))
    parser.add_argument("--password", default=os.environ.get("MYSQL_ROOT_PASSWORD", "mobilemodels_root"))
    parser.add_argument("--database", default=os.environ.get("MYSQL_DATABASE", "mobilemodels"))
    parser.add_argument("--reader-user", default=os.environ.get("MYSQL_READER_USER", ""))
    parser.add_argument("--reader-password", default=os.environ.get("MYSQL_READER_PASSWORD", ""))
    parser.add_argument("--wait-timeout", type=int, default=120)
    parser.add_argument("--check-only", action="store_true", help="Only check MySQL readiness")
    return parser.parse_args()


def main() -> int:
    args = parse_args()
    schema_path = args.schema if args.schema.is_absolute() else REPO_ROOT / args.schema
    seed_path = args.seed if args.seed.is_absolute() else REPO_ROOT / args.seed

    wait_for_mysql(args.user, args.password, args.host, args.port, args.wait_timeout)

    if args.check_only:
        print(f"MySQL ready: {args.host}:{args.port}")
        return 0

    run_sql_file(args.user, args.password, args.host, args.port, schema_path)
    run_sql_file(args.user, args.password, args.host, args.port, seed_path)

    if args.reader_user and args.reader_password:
        ensure_reader_user(
            args.user,
            args.password,
            args.host,
            args.port,
            args.database,
            args.reader_user,
            args.reader_password,
        )

    print(f"Loaded schema: {schema_path}")
    print(f"Loaded seed: {seed_path}")
    if args.reader_user:
        print(f"Ensured reader user: {args.reader_user}")
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
168
tools/sync_upstream_mobilemodels.py
Normal file
@@ -0,0 +1,168 @@
#!/usr/bin/env python3
"""Sync selected upstream MobileModels data into this repository."""

from __future__ import annotations

import argparse
import filecmp
import shutil
import subprocess
import sys
import tempfile
from pathlib import Path


REPO_ROOT = Path(__file__).resolve().parent.parent
DEFAULT_REPO_URL = "https://github.com/KHwang9883/MobileModels.git"
DEFAULT_BRANCH = "master"
SYNC_PATHS = [
    "brands",
    "misc",
    "README.md",
    "README_en.md",
    "CHANGELOG.md",
    "CHANGELOG_en.md",
    "LICENSE.txt",
]


def run(cmd: list[str], cwd: Path | None = None) -> None:
    subprocess.run(cmd, cwd=cwd or REPO_ROOT, check=True)


def remove_path(path: Path) -> None:
    if path.is_dir():
        shutil.rmtree(path)
    elif path.exists():
        path.unlink()


def sync_path(src: Path, dst: Path) -> None:
    if src.is_dir():
        dst.mkdir(parents=True, exist_ok=True)
        source_children = {child.name for child in src.iterdir()}

        for existing in dst.iterdir():
            if existing.name not in source_children:
                remove_path(existing)

        for child in src.iterdir():
            sync_path(child, dst / child.name)
        return

    dst.parent.mkdir(parents=True, exist_ok=True)
    if dst.exists() and filecmp.cmp(src, dst, shallow=False):
        return
    shutil.copy2(src, dst)


def sync_selected_paths(upstream_root: Path) -> None:
    for relative_path in SYNC_PATHS:
        src = upstream_root / relative_path
        dst = REPO_ROOT / relative_path
        if not src.exists():
            raise FileNotFoundError(f"Missing upstream path: {relative_path}")
        sync_path(src, dst)


def build_index(output_path: str) -> None:
    run(
        [
            sys.executable,
            str(REPO_ROOT / "tools/device_mapper.py"),
            "build",
            "--output",
            output_path,
        ]
    )


def export_mysql_seed(output_path: str) -> None:
    run(
        [
            sys.executable,
            str(REPO_ROOT / "tools/export_mysql_seed.py"),
            "--output",
            output_path,
        ]
    )


def load_mysql_seed(seed_path: str) -> None:
    run(
        [
            sys.executable,
            str(REPO_ROOT / "tools/load_mysql_seed.py"),
            "--seed",
            seed_path,
        ]
    )


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(
        description="Sync upstream MobileModels raw data and optionally rebuild the device index."
    )
    parser.add_argument("--repo-url", default=DEFAULT_REPO_URL, help="Upstream git repository URL")
    parser.add_argument("--branch", default=DEFAULT_BRANCH, help="Upstream branch to sync from")
    parser.add_argument(
        "--build-index",
        action="store_true",
        help="Rebuild dist/device_index.json after syncing upstream data",
    )
    parser.add_argument(
        "--index-output",
        default="dist/device_index.json",
        help="Output path for the rebuilt device index",
    )
    parser.add_argument(
        "--export-mysql-seed",
        action="store_true",
        help="Export MySQL seed SQL after syncing upstream data",
    )
    parser.add_argument(
        "--mysql-seed-output",
        default="dist/mobilemodels_mysql_seed.sql",
        help="Output path for the exported MySQL seed SQL",
    )
    parser.add_argument(
        "--load-mysql",
        action="store_true",
        help="Load schema and seed data into MySQL after exporting seed SQL",
    )
    return parser.parse_args()


def main() -> int:
    args = parse_args()

    with tempfile.TemporaryDirectory(prefix="mobilemodels-upstream-") as tmpdir:
        upstream_root = Path(tmpdir) / "upstream"
        run(
            [
                "git",
                "clone",
                "--depth",
                "1",
                "--branch",
                args.branch,
                args.repo_url,
                str(upstream_root),
            ]
        )
        sync_selected_paths(upstream_root)

    if args.build_index:
        build_index(args.index_output)

    if args.export_mysql_seed or args.load_mysql:
        export_mysql_seed(args.mysql_seed_output)

    if args.load_mysql:
        load_mysql_seed(args.mysql_seed_output)

    return 0


if __name__ == "__main__":
    raise SystemExit(main())
347
tools/web_server.py
Normal file
@@ -0,0 +1,347 @@
#!/usr/bin/env python3
"""Compose-facing web server for MobileModels static pages and maintenance APIs."""

from __future__ import annotations

import argparse
import json
import os
import re
import subprocess
import threading
from datetime import datetime
from http import HTTPStatus
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer
from pathlib import Path

from sync_upstream_mobilemodels import DEFAULT_BRANCH, DEFAULT_REPO_URL, REPO_ROOT


SYNC_SCRIPT = REPO_ROOT / "tools/sync_upstream_mobilemodels.py"
INDEX_PATH = REPO_ROOT / "dist/device_index.json"
MYSQL_SEED_PATH = REPO_ROOT / "dist/mobilemodels_mysql_seed.sql"
MYSQL_LOADER = REPO_ROOT / "tools/load_mysql_seed.py"
DATA_ROOT = Path(os.environ.get("MOBILEMODELS_DATA_ROOT", "/data"))
SYNC_METADATA_PATH = DATA_ROOT / "state/sync_status.json"
SYNC_LOCK = threading.Lock()
NORMALIZE_RE = re.compile(r"[^0-9a-z\u4e00-\u9fff]+")


def run_command(args: list[str]) -> subprocess.CompletedProcess[str]:
    return subprocess.run(
        args,
        cwd=REPO_ROOT,
        text=True,
        capture_output=True,
        check=False,
    )


def normalize_text(text: str) -> str:
    return NORMALIZE_RE.sub("", (text or "").lower())


def sql_string(value: str) -> str:
    return (value or "").replace("\\", "\\\\").replace("'", "''")


def mysql_command(database: str | None = None) -> list[str]:
    command = [
        "mysql",
        f"--host={os.environ.get('MYSQL_HOST', 'mysql')}",
        f"--port={os.environ.get('MYSQL_PORT', '3306')}",
        f"--user={os.environ.get('MYSQL_READER_USER', '')}",
        "--protocol=TCP",
        "--default-character-set=utf8mb4",
        "--batch",
        "--raw",
    ]
    if database:
        command.append(database)
    return command


def mysql_env() -> dict[str, str]:
    env = os.environ.copy()
    env["MYSQL_PWD"] = os.environ.get("MYSQL_READER_PASSWORD", "")
    return env


def run_mysql_query(sql: str, database: str | None = None) -> list[dict[str, str | None]]:
    proc = subprocess.run(
        mysql_command(database=database),
        env=mysql_env(),
        input=sql,
        text=True,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        check=False,
    )
    if proc.returncode != 0:
        message = proc.stderr.strip() or proc.stdout.strip() or f"mysql exited with {proc.returncode}"
        raise RuntimeError(message)

    lines = [line for line in proc.stdout.splitlines() if line.strip()]
    if not lines:
        return []

    headers = lines[0].split("\t")
    rows: list[dict[str, str | None]] = []
    for line in lines[1:]:
        values = line.split("\t")
        row = {}
        for idx, header in enumerate(headers):
            value = values[idx] if idx < len(values) else ""
            row[header] = None if value == "NULL" else value
        rows.append(row)
    return rows


def build_sql_query_payload(payload: dict[str, object]) -> dict[str, object]:
    raw_value = str(payload.get("model_raw") or payload.get("model") or "").strip()
    if not raw_value:
        raise RuntimeError("请填写设备标识。")

    alias_norm = normalize_text(raw_value)
    if not alias_norm:
        raise RuntimeError("设备标识无法归一化,请检查输入。")

    limit_value = payload.get("limit", 20)
    try:
        limit = int(limit_value)
    except Exception as err:
        raise RuntimeError("limit 必须是数字。") from err
    limit = max(1, min(limit, 100))

    sql = f"""
SELECT
    model,
    record_id,
    alias_norm,
    device_name,
    brand,
    manufacturer_brand,
    parent_brand,
    market_brand,
    device_type,
    source_file,
    section,
    source_rank,
    source_weight,
    code,
    code_alias,
    ver_name
FROM mobilemodels.mm_device_catalog
WHERE alias_norm = '{sql_string(alias_norm)}'
ORDER BY source_rank ASC, record_id ASC
LIMIT {limit};
""".strip()

    rows = run_mysql_query(sql)
    return {
        "query_mode": "sql",
        "model_raw": raw_value,
        "alias_norm": alias_norm,
        "limit": limit,
        "sql": sql,
        "rows": rows,
        "row_count": len(rows),
    }


def read_sync_metadata() -> dict[str, object]:
    if not SYNC_METADATA_PATH.exists():
        return {}
    try:
        return json.loads(SYNC_METADATA_PATH.read_text(encoding="utf-8"))
    except Exception:
        return {}


def write_sync_metadata(payload: dict[str, object]) -> None:
    SYNC_METADATA_PATH.parent.mkdir(parents=True, exist_ok=True)
    SYNC_METADATA_PATH.write_text(
        json.dumps(payload, ensure_ascii=False, indent=2),
        encoding="utf-8",
    )


def get_status_payload() -> dict[str, object]:
    index_mtime = None
    mysql_seed_mtime = None
    if INDEX_PATH.exists():
        index_mtime = datetime.fromtimestamp(INDEX_PATH.stat().st_mtime).isoformat(timespec="seconds")
    if MYSQL_SEED_PATH.exists():
        mysql_seed_mtime = datetime.fromtimestamp(MYSQL_SEED_PATH.stat().st_mtime).isoformat(timespec="seconds")

    mysql_host = os.environ.get("MYSQL_HOST", "mysql")
    mysql_port = os.environ.get("MYSQL_PORT", "3306")
    mysql_database = os.environ.get("MYSQL_DATABASE", "mobilemodels")
    mysql_reader_user = os.environ.get("MYSQL_READER_USER", "")
    mysql_reader_password = os.environ.get("MYSQL_READER_PASSWORD", "")
    mysql_ready = False
    mysql_status = ""
    sync_metadata = read_sync_metadata()
    mysql_proc = run_command(["python3", str(MYSQL_LOADER), "--check-only", "--wait-timeout", "5"])
    if mysql_proc.returncode == 0:
        mysql_ready = True
        mysql_status = mysql_proc.stdout.strip() or "MySQL ready"
    else:
        mysql_status = mysql_proc.stderr.strip() or mysql_proc.stdout.strip() or "MySQL unavailable"

    return {
        "supports_upstream_sync": True,
        "storage_mode": "docker_volume",
        "repo_root": str(REPO_ROOT),
        "data_root": str(DATA_ROOT),
        "upstream_repo_url": DEFAULT_REPO_URL,
        "upstream_branch": DEFAULT_BRANCH,
        "last_sync_time": sync_metadata.get("last_sync_time"),
        "last_upstream_commit": sync_metadata.get("last_upstream_commit"),
        "index_file": str(INDEX_PATH.relative_to(REPO_ROOT)),
        "index_mtime": index_mtime,
        "mysql_seed_file": str(MYSQL_SEED_PATH.relative_to(REPO_ROOT)),
        "mysql_seed_mtime": mysql_seed_mtime,
        "mysql_host": mysql_host,
        "mysql_port": mysql_port,
        "mysql_database": mysql_database,
        "mysql_reader_user": mysql_reader_user,
        "mysql_reader_password": mysql_reader_password,
        "mysql_ready": mysql_ready,
        "mysql_status": mysql_status,
    }


def run_upstream_sync() -> dict[str, object]:
    if not SYNC_LOCK.acquire(blocking=False):
        raise RuntimeError("已有同步任务在执行,请稍后再试。")

    try:
        upstream_proc = run_command(
            ["git", "ls-remote", DEFAULT_REPO_URL, f"refs/heads/{DEFAULT_BRANCH}"]
        )
        upstream_commit = ""
        if upstream_proc.returncode == 0 and upstream_proc.stdout.strip():
            upstream_commit = upstream_proc.stdout.split()[0]

        proc = run_command([
            "python3",
            str(SYNC_SCRIPT),
            "--build-index",
            "--export-mysql-seed",
            "--load-mysql",
        ])
        output = "\n".join(
            part for part in [proc.stdout.strip(), proc.stderr.strip()] if part
        ).strip()

        if proc.returncode != 0:
            raise RuntimeError(output or f"sync script failed with exit code {proc.returncode}")

        payload = {
            "storage_mode": "docker_volume",
            "repo_root": str(REPO_ROOT),
            "data_root": str(DATA_ROOT),
            "upstream_repo_url": DEFAULT_REPO_URL,
            "upstream_branch": DEFAULT_BRANCH,
            "upstream_commit": upstream_commit,
            "last_sync_time": datetime.now().isoformat(timespec="seconds"),
            "last_upstream_commit": upstream_commit,
            "index_file": str(INDEX_PATH.relative_to(REPO_ROOT)),
            "index_mtime": datetime.fromtimestamp(INDEX_PATH.stat().st_mtime).isoformat(timespec="seconds")
            if INDEX_PATH.exists()
            else None,
            "mysql_seed_file": str(MYSQL_SEED_PATH.relative_to(REPO_ROOT)),
            "mysql_seed_mtime": datetime.fromtimestamp(MYSQL_SEED_PATH.stat().st_mtime).isoformat(timespec="seconds")
            if MYSQL_SEED_PATH.exists()
            else None,
            "output": output or "同步脚本执行完成。",
        }
        write_sync_metadata({
            "last_sync_time": payload["last_sync_time"],
            "last_upstream_commit": payload["last_upstream_commit"],
            "upstream_repo_url": DEFAULT_REPO_URL,
            "upstream_branch": DEFAULT_BRANCH,
        })
        return payload
    finally:
        SYNC_LOCK.release()


class MobileModelsHandler(SimpleHTTPRequestHandler):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, directory=str(REPO_ROOT), **kwargs)

    def guess_type(self, path: str) -> str:
        content_type = super().guess_type(path)
        lower_path = path.lower()
        if lower_path.endswith(".md"):
            return "text/markdown; charset=utf-8"
        if lower_path.endswith(".txt"):
            return "text/plain; charset=utf-8"
        if content_type.startswith("text/") and "charset=" not in content_type:
            return f"{content_type}; charset=utf-8"
        return content_type

    def _send_json(self, payload: dict[str, object], status: int = HTTPStatus.OK) -> None:
        data = json.dumps(payload, ensure_ascii=False).encode("utf-8")
        self.send_response(status)
        self.send_header("Content-Type", "application/json; charset=utf-8")
        self.send_header("Content-Length", str(len(data)))
        self.send_header("Cache-Control", "no-store")
        self.end_headers()
        self.wfile.write(data)

    def do_GET(self) -> None:
        if self.path == "/api/status":
            try:
                self._send_json(get_status_payload())
            except Exception as err:
                self._send_json({"error": str(err)}, status=HTTPStatus.INTERNAL_SERVER_ERROR)
            return
        return super().do_GET()

    def do_POST(self) -> None:
        if self.path == "/api/sync-upstream":
            try:
                payload = run_upstream_sync()
                self._send_json(payload)
            except RuntimeError as err:
                status = HTTPStatus.CONFLICT if "已有同步任务" in str(err) else HTTPStatus.INTERNAL_SERVER_ERROR
                self._send_json({"error": str(err)}, status=status)
            except Exception as err:
                self._send_json({"error": str(err)}, status=HTTPStatus.INTERNAL_SERVER_ERROR)
            return
        if self.path == "/api/query-sql":
            try:
                content_length = int(self.headers.get("Content-Length", "0") or "0")
                raw_body = self.rfile.read(content_length) if content_length > 0 else b"{}"
                req = json.loads(raw_body.decode("utf-8") or "{}")
                payload = build_sql_query_payload(req if isinstance(req, dict) else {})
                self._send_json(payload)
            except RuntimeError as err:
                self._send_json({"error": str(err)}, status=HTTPStatus.BAD_REQUEST)
            except Exception as err:
                self._send_json({"error": str(err)}, status=HTTPStatus.INTERNAL_SERVER_ERROR)
            return

        self._send_json({"error": "Not found"}, status=HTTPStatus.NOT_FOUND)


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="Run the MobileModels web server inside Docker Compose.")
    parser.add_argument("--host", default="127.0.0.1", help="Bind host")
    parser.add_argument("--port", type=int, default=8123, help="Bind port")
    return parser.parse_args()


def main() -> int:
    args = parse_args()
    server = ThreadingHTTPServer((args.host, args.port), MobileModelsHandler)
    print(f"Serving MobileModels on http://{args.host}:{args.port}")
    server.serve_forever()
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
@@ -5,7 +5,25 @@

From repository root:

```bash
docker compose up --build -d
```

Optional environment setup:

```bash
cp .env.example .env
```

Stop:

```bash
docker compose down
```

Reset both MySQL and managed raw/index data:

```bash
docker compose down -v
```

Open:
@@ -13,14 +31,35 @@ Open:

- http://127.0.0.1:8123/web/device_query.html (设备查询)
- http://127.0.0.1:8123/web/brand_management.html (数据管理)

整个功能栈统一运行在 Docker Compose 中,不再依赖本地 Python 或本地 MySQL 直接启动。

容器启动时会自动完成:

- 构建 `dist/device_index.json`
- 导出 MySQL seed 文件
- 加载 MySQL schema 与 seed 数据
- 启动 Web 页面与 API 服务

容器内还会同时启动 MySQL:

- `127.0.0.1:3306`
- database: `mobilemodels`
- reader user: `mobilemodels_reader`

页面顶部有导航条(Gitea 风格):

- `设备查询`
- `数据管理`

设备查询页顶部额外提供三个页内 tab:

- `索引查询`
- `SQL 查询`
- `相关文档`

## 设备查询

### 客户端上报模式

字段按你给的上报方式输入:

@@ -30,41 +69,25 @@

- iOS
  - `platform=ios`
  - `model_raw=utsname.machine`(例如 `iPhone14,2`)
- HarmonyOS
  - `platform=harmony`
  - `model_raw`(例如 `NOH-AL00`)

说明:

- Android / iOS / HarmonyOS 都直接使用客户端原始上报的 `model_raw`。
- 页面会输出完整候选列表,并附带 `report_payload` 方便核对上报值。

### SQL 查询模式

- 直接调用 Compose 内的 API 查询 MySQL 主表 `mobilemodels.mm_device_catalog`
- 服务端会先把输入归一化成 `alias_norm`
- 页面会显示实际执行的 SQL、结果列表和返回 JSON
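A few lines of Python sketch the normalization rule (matching `NORMALIZE_RE` in `tools/web_server.py`: lowercase first, then strip every character that is not a digit, an ASCII lowercase letter, or a CJK ideograph):

```python
import re

# Same pattern as tools/web_server.py: runs of characters outside
# [0-9a-z\u4e00-\u9fff] are removed after lowercasing.
NORMALIZE_RE = re.compile(r"[^0-9a-z\u4e00-\u9fff]+")


def normalize_text(text: str) -> str:
    return NORMALIZE_RE.sub("", (text or "").lower())


print(normalize_text("iPhone14,2"))  # iphone142
print(normalize_text("NOH-AL00"))    # nohal00
```

This is why `iPhone14,2` and `iphone14,2` hit the same `alias_norm` row.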
### 相关文档

- 页面内统一展示主推 SQL、兼容 SQL、归一化规则和文档入口
- 可直接跳转查看 `misc/mysql-query-design.md`、`web/README.md`、`README.md`

## Output
@@ -103,8 +126,15 @@ buildProduct=o1qzcx
|
||||
- Supports drag-and-drop source ordering.
|
||||
- Ranking weight is higher for sources at the top.
|
||||
- Initial order puts `*_cn.md` before non-`cn` sources.
|
||||
- 原始数据同步(新增):
|
||||
- Available in the third left tab.
|
||||
- Calls the Compose service API to sync upstream `KHwang9883/MobileModels` raw markdown, rebuild `dist/device_index.json`, export MySQL seed, and reload MySQL.
|
||||
- Requires `docker compose up --build -d`.
|
||||
- 索引数据(新增):
|
||||
- Available in the fourth left tab.
|
||||
- Includes index reload and index load status.
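The sync tab's round trip boils down to two calls against the Compose service. The endpoints (`/api/status`, `/api/sync-upstream`) and the `supports_upstream_sync` flag come from the page script; the wrapper itself is an illustrative sketch with error handling trimmed.

```javascript
// Sketch: check capability via /api/status, then trigger /api/sync-upstream.
// Mirrors the page's fetch flow; the base URL default is an assumption.
async function syncUpstream(base = "") {
  const status = await fetch(`${base}/api/status`, { cache: "no-store" })
    .then((r) => r.json());
  if (!status.supports_upstream_sync) {
    throw new Error("service does not support upstream sync");
  }
  const resp = await fetch(`${base}/api/sync-upstream`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: "{}",
  });
  if (!resp.ok) throw new Error(`HTTP ${resp.status}`);
  // Response carries fields like last_sync_time / last_upstream_commit.
  return resp.json();
}
```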

## Notes

- If the browser blocks local file fetch, start an HTTP server as above.
- You can also manually upload `dist/device_index.json` in the page.
- Managed raw data, the rebuilt index, and the MySQL seed files are persisted in Docker volumes; they are not written back to the local workspace at runtime.
- For production, override `MYSQL_ROOT_PASSWORD` and `MYSQL_READER_PASSWORD` with your own values.

@@ -117,6 +117,10 @@
      font-weight: 600;
    }
    button.primary { background: var(--brand); color: #fff; }
    button:disabled {
      opacity: 0.6;
      cursor: not-allowed;
    }
    .result-head {
      display: flex;
      flex-wrap: wrap;
@@ -253,6 +257,18 @@
    .manage-panel.hidden {
      display: none;
    }
    .sync-log {
      min-height: 240px;
      white-space: pre-wrap;
      word-break: break-word;
      font-size: 12px;
      line-height: 1.45;
      background: #f6f8fb;
      border: 1px solid var(--line);
      border-radius: 10px;
      padding: 10px;
      margin: 0;
    }
    .hidden { display: none; }
    .modal-backdrop {
      position: fixed;
@@ -305,7 +321,7 @@
      }
      .manage-tabs {
        position: static;
        grid-template-columns: 1fr 1fr;
        grid-template-columns: repeat(4, minmax(0, 1fr));
      }
    }
  </style>
@@ -316,6 +332,7 @@
      <a href="/web/device_query.html" class="brand">MobileModels</a>
      <a href="/web/device_query.html" class="item">设备查询</a>
      <a href="/web/brand_management.html" class="item active">数据管理</a>
      <a href="/web/device_query.html?view=docs" class="item">相关文档</a>
    </div>
  </nav>

@@ -327,6 +344,8 @@
      <aside class="manage-tabs">
        <button id="tabBrandBtn" type="button" class="tab-btn active">品牌列表</button>
        <button id="tabSourceBtn" type="button" class="tab-btn">数据来源</button>
        <button id="tabSyncBtn" type="button" class="tab-btn">原始数据同步</button>
        <button id="tabIndexBtn" type="button" class="tab-btn">索引数据</button>
      </aside>

      <div class="manage-content">
@@ -374,6 +393,27 @@
          </ul>
        </div>
      </section>

      <section id="syncTabPanel" class="manage-panel hidden">
        <h3 class="title">原始数据同步</h3>
        <p class="sub">从上游 `KHwang9883/MobileModels` 拉取原始 markdown 数据,并重建 `dist/device_index.json`、刷新 MySQL。请先使用 `docker compose up --build -d` 启动完整服务。</p>
        <div class="btns">
          <button id="syncUpstreamBtn" type="button" class="primary">开始同步原始数据</button>
          <button id="refreshSyncStatusBtn" type="button">刷新同步状态</button>
        </div>
        <div id="syncStatus" class="sub">正在检测同步能力。</div>
        <pre id="syncLog" class="sync-log mono">暂无同步记录</pre>
      </section>

      <section id="indexTabPanel" class="manage-panel hidden">
        <h3 class="title">索引数据</h3>
        <p class="sub">这里集中显示 `dist/device_index.json` 的加载状态与基础统计,并提供手动重新加载入口。</p>
        <div class="btns">
          <button id="reloadIndexBtn" type="button" class="primary">重新加载索引</button>
        </div>
        <div id="indexStatus" class="sub">索引尚未加载。</div>
        <pre id="indexSummary" class="sync-log mono">暂无索引信息</pre>
      </section>
    </div>
  </div>
</section>
@@ -420,8 +460,22 @@
    const sourceOrderListEl = document.getElementById("sourceOrderList");
    const tabBrandBtnEl = document.getElementById("tabBrandBtn");
    const tabSourceBtnEl = document.getElementById("tabSourceBtn");
    const tabSyncBtnEl = document.getElementById("tabSyncBtn");
    const tabIndexBtnEl = document.getElementById("tabIndexBtn");
    const brandTabPanelEl = document.getElementById("brandTabPanel");
    const sourceTabPanelEl = document.getElementById("sourceTabPanel");
    const syncTabPanelEl = document.getElementById("syncTabPanel");
    const indexTabPanelEl = document.getElementById("indexTabPanel");
    const syncStatusEl = document.getElementById("syncStatus");
    const syncLogEl = document.getElementById("syncLog");
    const syncUpstreamBtnEl = document.getElementById("syncUpstreamBtn");
    const refreshSyncStatusBtnEl = document.getElementById("refreshSyncStatusBtn");
    const reloadIndexBtnEl = document.getElementById("reloadIndexBtn");
    const indexStatusEl = document.getElementById("indexStatus");
    const indexSummaryEl = document.getElementById("indexSummary");

    let syncSupported = false;
    let syncRunning = false;

    function normalizeText(text) {
      return (text || "").toLowerCase().replace(/[^0-9a-z\u4e00-\u9fff]+/g, "");
@@ -436,6 +490,107 @@
        .replace(/'/g, "&#39;");
    }

    async function fetchJson(url, options = {}) {
      const resp = await fetch(url, options);
      const data = await resp.json().catch(() => ({}));
      if (!resp.ok) {
        throw new Error((data && data.error) || `HTTP ${resp.status}`);
      }
      return data;
    }

    function updateSyncButtons() {
      syncUpstreamBtnEl.disabled = syncRunning || !syncSupported;
      refreshSyncStatusBtnEl.disabled = syncRunning;
    }

    function renderIndexStatus(message, details) {
      indexStatusEl.textContent = message || "索引尚未加载。";
      indexSummaryEl.textContent = details || "暂无索引信息";
    }

    function renderSyncLog(data, fallbackTitle) {
      if (!data) {
        syncLogEl.textContent = fallbackTitle || "暂无同步记录";
        return;
      }

      const lines = [];
      if (fallbackTitle) lines.push(fallbackTitle);
      if (data.data_root) lines.push(`数据目录: ${data.data_root}`);
      if (data.repo_root) lines.push(`应用目录: ${data.repo_root}`);
      if (data.storage_mode) lines.push(`存储模式: ${data.storage_mode}`);
      if (data.upstream_repo_url) lines.push(`上游仓库: ${data.upstream_repo_url}`);
      if (data.upstream_branch) lines.push(`上游分支: ${data.upstream_branch}`);
      if (data.last_sync_time) lines.push(`最近同步时间: ${data.last_sync_time}`);
      if (data.last_upstream_commit) lines.push(`最近同步提交: ${data.last_upstream_commit}`);
      if (data.index_file) lines.push(`索引文件: ${data.index_file}`);
      if (data.index_mtime) lines.push(`索引更新时间: ${data.index_mtime}`);
      if (data.mysql_seed_file) lines.push(`MySQL Seed: ${data.mysql_seed_file}`);
      if (data.mysql_seed_mtime) lines.push(`MySQL Seed 更新时间: ${data.mysql_seed_mtime}`);
      if (data.mysql_host && data.mysql_port) lines.push(`MySQL 地址: ${data.mysql_host}:${data.mysql_port}`);
      if (data.mysql_database) lines.push(`MySQL 数据库: ${data.mysql_database}`);
      if (data.mysql_reader_user) lines.push(`MySQL 只读账号: ${data.mysql_reader_user}`);
      if (typeof data.mysql_ready === "boolean") lines.push(`MySQL 状态: ${data.mysql_ready ? "ready" : "not ready"}`);
      if (data.mysql_status) lines.push(`MySQL 详情: ${data.mysql_status}`);
      if (data.output) {
        lines.push("");
        lines.push("同步输出:");
        lines.push(data.output);
      }
      syncLogEl.textContent = lines.join("\n").trim() || "暂无同步记录";
    }

    async function loadSyncStatus(options = {}) {
      const preserveLog = !!options.preserveLog;
      syncStatusEl.textContent = "正在检测同步能力。";
      try {
        const data = await fetchJson("/api/status", { cache: "no-store" });
        syncSupported = !!data.supports_upstream_sync;
        syncStatusEl.textContent = syncSupported
          ? "已连接 Docker Compose 服务,可以直接从页面同步原始数据、索引和 MySQL。"
          : "当前服务不支持原始数据同步。";
        if (!preserveLog) {
          renderSyncLog(data, "服务状态");
        }
      } catch (err) {
        syncSupported = false;
        syncStatusEl.textContent = `当前页面未连接支持同步的 Docker Compose 服务:${err.message}`;
        if (!preserveLog) {
          syncLogEl.textContent = "请使用 `docker compose up --build -d` 启动完整服务后,再使用这个功能。";
        }
      } finally {
        updateSyncButtons();
      }
    }

    async function runUpstreamSync() {
      if (syncRunning) return;
      syncRunning = true;
      updateSyncButtons();
      syncStatusEl.textContent = "正在同步原始数据、重建索引并刷新 MySQL,请稍候。";
      syncLogEl.textContent = "同步进行中...";

      try {
        const data = await fetchJson("/api/sync-upstream", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: "{}",
        });
        syncSupported = true;
        syncStatusEl.textContent = "同步完成,页面索引已刷新。";
        renderSyncLog(data, "同步完成");
        await loadIndexFromPath();
      } catch (err) {
        syncSupported = false;
        syncStatusEl.textContent = `同步失败: ${err.message}`;
        syncLogEl.textContent = `同步失败\n${err.message}`;
      } finally {
        syncRunning = false;
        await loadSyncStatus({ preserveLog: true });
      }
    }

    function normalizeAliasList(name, aliases) {
      const out = [];
      const seen = new Set();
@@ -1031,6 +1186,7 @@

    async function loadIndexFromPath() {
      try {
        renderIndexStatus("正在加载 dist/device_index.json ...", "加载中...");
        const resp = await fetch("../dist/device_index.json", { cache: "no-cache" });
        if (!resp.ok) throw new Error(`HTTP ${resp.status}`);
        indexData = await resp.json();
@@ -1039,9 +1195,32 @@
        rebuildManagedBrandIndexes();
        renderBrandStats();
        renderSourceOrder();
        const sourceCount = Array.isArray(managedSourceConfig && managedSourceConfig.order)
          ? managedSourceConfig.order.length
          : 0;
        const brandCount = managedBrandConfig && Array.isArray(managedBrandConfig.brands)
          ? managedBrandConfig.brands.length
          : 0;
        const manufacturerCount = managedBrandConfig && Array.isArray(managedBrandConfig.manufacturers)
          ? managedBrandConfig.manufacturers.length
          : 0;
        renderIndexStatus(
          `索引加载成功: records=${indexData.total_records}, lookup=${Object.keys(indexData.lookup || {}).length}`,
          [
            `generated_on: ${indexData.generated_on || "(unknown)"}`,
            `total_records: ${indexData.total_records || 0}`,
            `lookup_keys: ${Object.keys(indexData.lookup || {}).length}`,
            `brand_count: ${brandCount}`,
            `manufacturer_count: ${manufacturerCount}`,
            `source_count: ${sourceCount}`,
          ].join("\n")
        );
      } catch (err) {
        brandStatsEl.textContent = `索引加载失败: ${err.message}`;
        sourceOrderStatsEl.textContent = `索引加载失败: ${err.message}`;
        renderIndexStatus(`索引加载失败: ${err.message}`, `索引加载失败\n${err.message}`);
        brandStatsEl.textContent = "索引不可用,暂无品牌关系数据。";
        sourceOrderStatsEl.textContent = "索引不可用,暂无来源数据。";
        brandRelationBodyEl.innerHTML = `<tr><td colspan="3" class="sub">暂无关系数据</td></tr>`;
        sourceOrderListEl.innerHTML = `<li class="sub">暂无来源数据</li>`;
      }
    }

@@ -1052,6 +1231,9 @@
    document.getElementById("resetSourceOrderBtn").addEventListener("click", resetSourceOrder);
    brandCountBtnEl.addEventListener("click", openBrandListModal);
    manufacturerCountBtnEl.addEventListener("click", openManufacturerListModal);
    syncUpstreamBtnEl.addEventListener("click", runUpstreamSync);
    refreshSyncStatusBtnEl.addEventListener("click", loadSyncStatus);
    reloadIndexBtnEl.addEventListener("click", loadIndexFromPath);

    brandModalCancelBtnEl.addEventListener("click", closeBrandModal);
    brandModalBackdropEl.addEventListener("click", (e) => {
@@ -1069,17 +1251,27 @@

    function switchManageTab(tab) {
      const isBrand = tab === "brand";
      const isSource = tab === "source";
      const isSync = tab === "sync";
      const isIndex = tab === "index";
      tabBrandBtnEl.classList.toggle("active", isBrand);
      tabSourceBtnEl.classList.toggle("active", !isBrand);
      tabSourceBtnEl.classList.toggle("active", isSource);
      tabSyncBtnEl.classList.toggle("active", isSync);
      tabIndexBtnEl.classList.toggle("active", isIndex);
      brandTabPanelEl.classList.toggle("hidden", !isBrand);
      sourceTabPanelEl.classList.toggle("hidden", isBrand);
      sourceTabPanelEl.classList.toggle("hidden", !isSource);
      syncTabPanelEl.classList.toggle("hidden", !isSync);
      indexTabPanelEl.classList.toggle("hidden", !isIndex);
    }

    tabBrandBtnEl.addEventListener("click", () => switchManageTab("brand"));
    tabSourceBtnEl.addEventListener("click", () => switchManageTab("source"));
    tabSyncBtnEl.addEventListener("click", () => switchManageTab("sync"));
    tabIndexBtnEl.addEventListener("click", () => switchManageTab("index"));

    switchManageTab("brand");
    loadIndexFromPath();
    loadSyncStatus();
  </script>
</body>
</html>

File diff suppressed because it is too large
181 web/doc_viewer.html Normal file
@@ -0,0 +1,181 @@
<!doctype html>
<html lang="zh-CN">
<head>
  <meta charset="UTF-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />
  <title>MobileModels 文档查看</title>
  <style>
    :root {
      --bg: #f5f7fb;
      --card: #ffffff;
      --text: #1c2430;
      --sub: #566173;
      --line: #d9e0ea;
      --brand: #0f6fff;
    }
    * { box-sizing: border-box; }
    body {
      margin: 0;
      font-family: "PingFang SC", "Noto Sans SC", "Microsoft YaHei", sans-serif;
      background: radial-gradient(circle at 0 0, #eef4ff 0, var(--bg) 40%), var(--bg);
      color: var(--text);
    }
    .top-nav {
      background: linear-gradient(180deg, #1f2a3a, #1a2431);
      border-bottom: 1px solid rgba(255, 255, 255, 0.08);
    }
    .top-nav-inner {
      max-width: 1200px;
      margin: 0 auto;
      padding: 0 16px;
      height: 52px;
      display: flex;
      align-items: center;
      gap: 8px;
    }
    .top-nav .brand,
    .top-nav .item {
      color: #d6e3f7;
      text-decoration: none;
      font-size: 14px;
      padding: 6px 10px;
      border-radius: 6px;
    }
    .top-nav .brand {
      font-weight: 700;
      margin-right: 8px;
      color: #f4f8ff;
    }
    .top-nav .item.active {
      background: rgba(255, 255, 255, 0.16);
      color: #ffffff;
      font-weight: 600;
    }
    .wrap {
      max-width: 1200px;
      margin: 24px auto;
      padding: 0 16px 32px;
      display: grid;
      gap: 16px;
    }
    .card {
      background: var(--card);
      border: 1px solid var(--line);
      border-radius: 14px;
      padding: 14px;
      box-shadow: 0 6px 18px rgba(36, 56, 89, 0.06);
    }
    .title {
      margin: 0 0 8px;
      font-size: 18px;
      font-weight: 700;
    }
    .sub {
      margin: 0 0 14px;
      color: var(--sub);
      font-size: 13px;
      line-height: 1.5;
    }
    .btns {
      display: flex;
      gap: 8px;
      flex-wrap: wrap;
      margin-top: 12px;
    }
    .btn {
      display: inline-flex;
      align-items: center;
      padding: 9px 14px;
      border-radius: 10px;
      border: 1px solid #c8d6ee;
      background: #f7faff;
      color: #244775;
      text-decoration: none;
      font-size: 13px;
      font-weight: 700;
    }
    .btn:hover {
      background: #eef5ff;
    }
    pre {
      margin: 0;
      white-space: pre-wrap;
      word-break: break-word;
      font-size: 13px;
      line-height: 1.65;
      background: #f6f8fb;
      border: 1px solid var(--line);
      border-radius: 10px;
      padding: 14px;
      overflow: auto;
    }
  </style>
</head>
<body>
  <nav class="top-nav">
    <div class="top-nav-inner">
      <a href="/web/device_query.html" class="brand">MobileModels</a>
      <a href="/web/device_query.html" class="item">设备查询</a>
      <a href="/web/brand_management.html" class="item">数据管理</a>
      <a href="/web/device_query.html?view=docs" class="item active">相关文档</a>
    </div>
  </nav>

  <div class="wrap">
    <section class="card">
      <h1 class="title" id="docTitle">文档查看</h1>
      <p class="sub" id="docPath">正在加载文档...</p>
      <div class="btns">
        <a class="btn" href="/web/doc_viewer.html?path=/misc/mysql-query-design.md&title=MySQL%20%E8%AE%BE%E8%AE%A1%E8%AF%B4%E6%98%8E">MySQL 设计说明</a>
        <a class="btn" href="/web/doc_viewer.html?path=/web/README.md&title=Web%20%E4%BD%BF%E7%94%A8%E8%AF%B4%E6%98%8E">Web 使用说明</a>
        <a class="btn" href="/web/doc_viewer.html?path=/README.md&title=%E9%A1%B9%E7%9B%AE%20README">项目 README</a>
      </div>
    </section>

    <section class="card">
      <pre id="docContent">加载中...</pre>
    </section>
  </div>

  <script>
    const ALLOWED_DOCS = new Map([
      ["/misc/mysql-query-design.md", "MySQL 设计说明"],
      ["/web/README.md", "Web 使用说明"],
      ["/README.md", "项目 README"],
    ]);

    async function main() {
      const params = new URLSearchParams(window.location.search);
      const path = params.get("path") || "/misc/mysql-query-design.md";
      const title = params.get("title") || ALLOWED_DOCS.get(path) || "文档查看";
      const docTitleEl = document.getElementById("docTitle");
      const docPathEl = document.getElementById("docPath");
      const docContentEl = document.getElementById("docContent");

      if (!ALLOWED_DOCS.has(path)) {
        docTitleEl.textContent = "文档不存在";
        docPathEl.textContent = path;
        docContentEl.textContent = "当前只允许查看预设文档。";
        return;
      }

      document.title = `${title} - MobileModels`;
      docTitleEl.textContent = title;
      docPathEl.textContent = path;

      try {
        const resp = await fetch(path, { cache: "no-store" });
        if (!resp.ok) {
          throw new Error(`HTTP ${resp.status}`);
        }
        const text = await resp.text();
        docContentEl.textContent = text;
      } catch (err) {
        docContentEl.textContent = `加载失败\n${err.message || err}`;
      }
    }

    main();
  </script>
</body>
</html>