Compare commits

...

6 Commits

Author  SHA1        Message                                                                                 Date
Vinlic  ad10146152  Release 0.0.37                                                                          2025-04-28 13:30:48 +08:00
Vinlic  61e6596352  Adjust the retry count                                                                  2025-04-28 13:30:35 +08:00
Vinlic  1e4d23d109  Release 0.0.36                                                                          2025-04-28 13:17:17 +08:00
Vinlic  8de7a4e1ec  Follow the official site's new anti-reverse-engineering measures; support the latest reasoning and DeepResearch models  2025-04-28 13:17:14 +08:00
Vinlic  2f12a5daef  Release 0.0.35                                                                          2024-12-31 12:47:47 +08:00
Vinlic  2ce738b1fc  Fix handling of unexpected model names                                                  2024-12-31 12:47:16 +08:00
5 changed files with 228 additions and 120 deletions

View File

@@ -9,7 +9,7 @@
 ![](https://img.shields.io/github/forks/llm-red-team/glm-free-api.svg)
 ![](https://img.shields.io/docker/pulls/vinlic/glm-free-api.svg)
-支持GLM-4-Plus高速流式输出、支持多轮对话、支持智能体对话、支持Zero思考推理模型、支持视频生成、支持AI绘图、支持联网搜索、支持长文档解读、支持图像解析零配置部署多路token支持自动清理会话痕迹。
+支持GLM-4-Plus高速流式输出、支持多轮对话、支持智能体对话、支持沉思模型、支持Zero思考推理模型、支持视频生成、支持AI绘图、支持联网搜索、支持长文档解读、支持图像解析零配置部署多路token支持自动清理会话痕迹。
 与ChatGPT接口完全兼容。
@@ -37,28 +37,40 @@ MiniMax海螺AI接口转API [hailuo-free-api](https://github.com/LLM-Red-T
 ## 目录
-* [免责声明](#免责声明)
-* [效果示例](#效果示例)
-* [接入准备](#接入准备)
-* [智能体接入](#智能体接入)
-* [多账号接入](#多账号接入)
-* [Docker部署](#Docker部署)
-* [Docker-compose部署](#Docker-compose部署)
-* [Render部署](#Render部署)
-* [Vercel部署](#Vercel部署)
-* [原生部署](#原生部署)
-* [推荐使用客户端](#推荐使用客户端)
-* [接口列表](#接口列表)
-* [对话补全](#对话补全)
-* [视频生成](#视频生成)
-* [AI绘图](#AI绘图)
-* [文档解读](#文档解读)
-* [图像解析](#图像解析)
-* [refresh_token存活检测](#refresh_token存活检测)
-* [注意事项](#注意事项)
-* [Nginx反代优化](#Nginx反代优化)
-* [Token统计](#Token统计)
-* [Star History](#star-history)
+- [GLM AI Free 服务](#glm-ai-free-服务)
+- [目录](#目录)
+- [免责声明](#免责声明)
+- [效果示例](#效果示例)
+  - [验明正身Demo](#验明正身demo)
+  - [智能体对话Demo](#智能体对话demo)
+  - [结合Dify工作流Demo](#结合dify工作流demo)
+  - [多轮对话Demo](#多轮对话demo)
+  - [视频生成Demo](#视频生成demo)
+  - [AI绘图Demo](#ai绘图demo)
+  - [联网搜索Demo](#联网搜索demo)
+  - [长文档解读Demo](#长文档解读demo)
+  - [代码调用Demo](#代码调用demo)
+  - [图像解析Demo](#图像解析demo)
+- [接入准备](#接入准备)
+  - [智能体接入](#智能体接入)
+  - [多账号接入](#多账号接入)
+- [Docker部署](#docker部署)
+  - [Docker-compose部署](#docker-compose部署)
+  - [Render部署](#render部署)
+  - [Vercel部署](#vercel部署)
+- [原生部署](#原生部署)
+- [推荐使用客户端](#推荐使用客户端)
+- [接口列表](#接口列表)
+  - [对话补全](#对话补全)
+  - [视频生成](#视频生成)
+  - [AI绘图](#ai绘图)
+  - [文档解读](#文档解读)
+  - [图像解析](#图像解析)
+  - [refresh\_token存活检测](#refresh_token存活检测)
+- [注意事项](#注意事项)
+  - [Nginx反代优化](#nginx反代优化)
+  - [Token统计](#token统计)
+- [Star History](#star-history)
 ## 免责声明
@@ -288,6 +300,7 @@ Authorization: Bearer [refresh_token]
 {
     // 默认模型glm-4-plus
     // zero思考推理模型glm-4-zero / glm-4-think
+    // 沉思模型glm-4-deepresearch
     // 如果使用智能体请填写智能体ID到此处
     "model": "glm-4-plus",
     // 目前多轮对话基于消息合并实现某些场景可能导致能力下降且受单轮最大token数限制

View File

@@ -5,7 +5,7 @@
 ![](https://img.shields.io/github/forks/llm-red-team/glm-free-api.svg)
 ![](https://img.shields.io/docker/pulls/vinlic/glm-free-api.svg)
-Supports high-speed streaming output, multi-turn dialogues, internet search, long document reading, image analysis, zero-configuration deployment, multi-token support, and automatic session trace cleanup.
+Supports high-speed streaming output, multi-turn dialogues, internet search, long document reading, image analysis, deepresearch, zero-configuration deployment, multi-token support, and automatic session trace cleanup.
 Fully compatible with the ChatGPT interface.
@@ -33,29 +33,41 @@ Lingxin Intelligence (Emohaa) API to API [emohaa-free-api](https://github.com/LL
 ## Table of Contents
-* [Announcement](#Announcement)
-* [Online Experience](#Online-Experience)
-* [Effect Examples](#Effect-Examples)
-* [Access Preparation](#Access-Preparation)
-* [Agent Access](#Agent-Access)
-* [Multiple Account Access](#Multiple-Account-Access)
-* [Docker Deployment](#Docker-Deployment)
-* [Docker-compose Deployment](#Docker-compose-Deployment)
-* [Render Deployment](#Render-Deployment)
-* [Vercel Deployment](#Vercel-Deployment)
-* [Native Deployment](#Native-Deployment)
-* [Recommended Clients](#Recommended-Clients)
-* [Interface List](#Interface-List)
-* [Conversation Completion](#Conversation-Completion)
-* [Video Generation](#Video-Generation)
-* [AI Drawing](#AI-Drawing)
-* [Document Interpretation](#Document-Interpretation)
-* [Image Analysis](#Image-Analysis)
-* [Refresh_token Survival Detection](#Refresh_token-Survival-Detection)
-* [Notification](#Notification)
-* [Nginx Anti-generation Optimization](#Nginx-Anti-generation-Optimization)
-* [Token Statistics](#Token-Statistics)
-* [Star History](#star-history)
+- [GLM AI Free Service](#glm-ai-free-service)
+- [Table of Contents](#table-of-contents)
+- [Announcement](#announcement)
+- [Online Experience](#online-experience)
+- [Effect Examples](#effect-examples)
+  - [Identity Verification](#identity-verification)
+  - [AI-Agent](#ai-agent)
+  - [Combined with Dify workflow](#combined-with-dify-workflow)
+  - [Multi-turn Dialogue](#multi-turn-dialogue)
+  - [Video Generation](#video-generation)
+  - [AI Drawing](#ai-drawing)
+  - [Internet Search](#internet-search)
+  - [Long Document Reading](#long-document-reading)
+  - [Using Code](#using-code)
+  - [Image Analysis](#image-analysis)
+- [Access Preparation](#access-preparation)
+  - [Agent Access](#agent-access)
+  - [Multiple Account Access](#multiple-account-access)
+- [Docker Deployment](#docker-deployment)
+  - [Docker-compose Deployment](#docker-compose-deployment)
+  - [Render Deployment](#render-deployment)
+  - [Vercel Deployment](#vercel-deployment)
+- [Native Deployment](#native-deployment)
+- [Recommended Clients](#recommended-clients)
+- [interface List](#interface-list)
+  - [Conversation Completion](#conversation-completion)
+  - [Video Generation](#video-generation-1)
+  - [AI Drawing](#ai-drawing-1)
+  - [Document Interpretation](#document-interpretation)
+  - [Image Analysis](#image-analysis-1)
+  - [Refresh\_token Survival Detection](#refresh_token-survival-detection)
+- [Notification](#notification)
+  - [Nginx Anti-generation Optimization](#nginx-anti-generation-optimization)
+  - [Token Statistics](#token-statistics)
+- [Star History](#star-history)
 ## Announcement
@@ -291,6 +303,7 @@ Request data:
 {
     // Default model: glm-4-plus
     // zero thinking model: glm-4-zero / glm-4-think
+    // deepresearch model: glm-4-deepresearch
     // If using the Agent, fill in the Agent ID here
     "model": "glm-4",
     // Currently, multi-round conversations are realized based on message merging, which in some scenarios may lead to capacity degradation and is limited by the maximum number of tokens in a single round.

View File

@@ -1,6 +1,6 @@
 {
   "name": "glm-free-api",
-  "version": "0.0.34",
+  "version": "0.0.37",
   "description": "GLM Free API Server",
   "type": "module",
   "main": "dist/index.js",

View File

@@ -3,7 +3,6 @@ import path from "path";
 import _ from "lodash";
 import mime from "mime";
 import sharp from "sharp";
-import fs from "fs-extra";
 import FormData from "form-data";
 import axios, { AxiosResponse } from "axios";
@@ -17,8 +16,8 @@ import util from "@/lib/util.ts";
 const MODEL_NAME = "glm";
 // Default assistant ID (GLM4)
 const DEFAULT_ASSISTANT_ID = "65940acff94777010aa6b796";
-// Assistant ID of the zero reasoning model
-const ZERO_ASSISTANT_ID = "676411c38945bbc58a905d31";
+// Signing secret (remember to update it when the official site changes)
+const SIGN_SECRET = "8a1317a7468aa3ad86e997d08f3f31cb";
 // access_token validity period
 const ACCESS_TOKEN_EXPIRES = 3600;
 // Maximum retry count
@@ -27,17 +26,28 @@ const MAX_RETRY_COUNT = 3;
 const RETRY_DELAY = 5000;
 // Spoofed request headers
 const FAKE_HEADERS = {
-  Accept: "*/*",
-  "App-Name": "chatglm",
-  Platform: "pc",
-  Origin: "https://chatglm.cn",
-  "Sec-Ch-Ua":
-    '"Chromium";v="122", "Not(A:Brand";v="24", "Google Chrome";v="122"',
-  "Sec-Ch-Ua-Mobile": "?0",
-  "Sec-Ch-Ua-Platform": '"Windows"',
-  "User-Agent":
-    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36",
-  Version: "0.0.1",
+  "Accept": "application/json, text/plain, */*",
+  "Accept-Encoding": "gzip, deflate, br, zstd",
+  "Accept-Language": "zh-CN,zh;q=0.9,en;q=0.8",
+  "Cache-Control": "no-cache",
+  "App-Name": "chatglm",
+  "Origin": "https://chatglm.cn",
+  "Pragma": "no-cache",
+  "sec-ch-ua":
+    '"Chromium";v="134", "Not:A-Brand";v="24", "Google Chrome";v="134"',
+  "sec-ch-ua-mobile": "?0",
+  "sec-ch-ua-platform": '"macOS"',
+  "Sec-Fetch-Dest": "empty",
+  "Sec-Fetch-Mode": "cors",
+  "Sec-Fetch-Site": "same-origin",
+  "User-Agent":
+    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36",
+  'X-App-Platform': 'pc',
+  'X-App-Version': '0.0.1',
+  'X-Device-Brand': '',
+  'X-Device-Model': '',
+  'X-Exp-Groups': 'na_android_config:exp:NA,na_4o_config:exp:4o_A,na_glm4plus_config:exp:open,mainchat_server_app:exp:A,mobile_history_daycheck:exp:a,desktop_toolbar:exp:A,chat_drawing_server:exp:A,drawing_server_cogview:exp:cogview4,app_welcome_v2:exp:B,chat_drawing_streamv2:exp:A,mainchat_rm_fc:exp:add,mainchat_dr:exp:open,chat_auto_entrance:exp:A',
+  'X-Lang': 'zh'
 };
 // Maximum file size
 const FILE_MAX_SIZE = 100 * 1024 * 1024;
@@ -46,6 +56,29 @@ const accessTokenMap = new Map();
 // access_token request queue map
 const accessTokenRequestQueueMap: Record<string, Function[]> = {};

+/**
+ * Generate the request sign
+ */
+async function generateSign() {
+  // Zhipu's timestamp algorithm (remember to update it when the official site changes)
+  const e = Date.now()
+    , A = e.toString()
+    , t = A.length
+    , o = A.split("").map((e => Number(e)))
+    , i = o.reduce(((e, A) => e + A), 0) - o[t - 2]
+    , a = i % 10;
+  const timestamp = A.substring(0, t - 2) + a + A.substring(t - 1, t);
+  // Random UUID
+  const nonce = util.uuid(false);
+  // Sign
+  const sign = util.md5(`${timestamp}-${nonce}-${SIGN_SECRET}`);
+  return {
+    timestamp,
+    nonce,
+    sign
+  }
+}
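The added `generateSign` above is easier to read untangled from its minified variable names: the second-to-last digit of the millisecond timestamp is replaced by a checksum digit, `(sum of all digits - that digit) mod 10`, and `X-Sign` is the MD5 of `timestamp-nonce-secret`. A minimal standalone sketch, substituting Node's `crypto` module for the project's `util.uuid`/`util.md5` helpers (an assumption on our part about what those helpers do):

```typescript
import { createHash, randomUUID } from "crypto";

const SIGN_SECRET = "8a1317a7468aa3ad86e997d08f3f31cb";

// Replace the second-to-last digit of the millisecond timestamp with a
// checksum digit: (sum of all digits - that digit) mod 10.
function encodeTimestamp(ms: number): string {
  const s = ms.toString();
  const digits = s.split("").map(Number);
  const check =
    (digits.reduce((acc, d) => acc + d, 0) - digits[s.length - 2]) % 10;
  return s.slice(0, s.length - 2) + check + s.slice(s.length - 1);
}

// Build the X-Timestamp / X-Nonce / X-Sign triple attached to each request.
function generateSign() {
  const timestamp = encodeTimestamp(Date.now());
  const nonce = randomUUID().replace(/-/g, ""); // stand-in for util.uuid(false)
  const sign = createHash("md5")
    .update(`${timestamp}-${nonce}-${SIGN_SECRET}`)
    .digest("hex"); // stand-in for util.md5
  return { timestamp, nonce, sign };
}
```

The checksum is self-verifying: presumably the server recomputes the expected check digit from the other digits of `X-Timestamp`, so hand-edited or replayed values are cheap to reject.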
/**
 * access_token
 *
@@ -61,26 +94,32 @@ async function requestToken(refreshToken: string) {
   accessTokenRequestQueueMap[refreshToken] = [];
   logger.info(`Refresh token: ${refreshToken}`);
   const result = await (async () => {
+    // Generate the sign
+    const sign = await generateSign();
     const result = await axios.post(
-      "https://chatglm.cn/chatglm/backend-api/v1/user/refresh",
+      "https://chatglm.cn/chatglm/user-api/user/refresh",
       {},
       {
         headers: {
+          // Referer: "https://chatglm.cn/main/alltoolsdetail",
           Authorization: `Bearer ${refreshToken}`,
-          Referer: "https://chatglm.cn/main/alltoolsdetail",
-          "X-Device-Id": util.uuid(false),
-          "X-Request-Id": util.uuid(false),
+          "Content-Type": "application/json",
           ...FAKE_HEADERS,
+          "X-Device-Id": util.uuid(false),
+          "X-Nonce": sign.nonce,
+          "X-Request-Id": util.uuid(false),
+          "X-Sign": sign.sign,
+          "X-Timestamp": `${sign.timestamp}`,
         },
         timeout: 15000,
         validateStatus: () => true,
       }
     );
     const { result: _result } = checkResult(result, refreshToken);
-    const { accessToken } = _result;
+    const { access_token, refresh_token } = _result;
     return {
-      accessToken,
-      refreshToken,
+      accessToken: access_token,
+      refreshToken: refresh_token,
       refreshTime: util.unixTimestamp() + ACCESS_TOKEN_EXPIRES,
     };
   })()
@@ -140,7 +179,7 @@ async function removeConversation(
   assistantId = DEFAULT_ASSISTANT_ID
 ) {
   const token = await acquireToken(refreshToken);
+  const sign = await generateSign();
   const result = await axios.post(
     "https://chatglm.cn/chatglm/backend-api/assistant/conversation/delete",
     {
@@ -153,6 +192,9 @@
       Referer: `https://chatglm.cn/main/alltoolsdetail`,
       "X-Device-Id": util.uuid(false),
       "X-Request-Id": util.uuid(false),
+      "X-Sign": sign.sign,
+      "X-Timestamp": sign.timestamp,
+      "X-Nonce": sign.nonce,
       ...FAKE_HEADERS,
     },
     timeout: 15000,
@@ -191,15 +233,22 @@ async function createCompletion(
   // Reset the reference if the referenced conversation ID is invalid
   if (!/[0-9a-zA-Z]{24}/.test(refConvId)) refConvId = "";
-  let assistantId = /^[a-z0-9]{24,}$/.test(model) ? model : undefined;
+  let assistantId = /^[a-z0-9]{24,}$/.test(model) ? model : DEFAULT_ASSISTANT_ID;
+  let chatMode = '';
   if(model.indexOf('think') != -1 || model.indexOf('zero') != -1) {
-    assistantId = ZERO_ASSISTANT_ID;
-    logger.info('使用思考模型');
+    chatMode = 'zero';
+    logger.info('使用【推理】模型');
+  }
+  if(model.indexOf('deepresearch') != -1) {
+    chatMode = 'deep_research';
+    logger.info('使用【沉思DeepResearch】模型');
   }
   // Request the stream
   const token = await acquireToken(refreshToken);
+  const sign = await generateSign();
   const result = await axios.post(
     "https://chatglm.cn/chatglm/backend-api/assistant/stream",
     {
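The hunk above is the heart of this release's model routing: instead of switching to a dedicated `ZERO_ASSISTANT_ID`, every request now targets the default assistant and a `chat_mode` flag selects the variant. A standalone sketch of the mapping (the helper name `resolveChatMode` is ours, not the repo's):

```typescript
// Map an OpenAI-style model name onto chatglm.cn's chat_mode flag,
// mirroring the two indexOf checks in the diff (the later check wins).
function resolveChatMode(model: string): string | undefined {
  let chatMode = "";
  if (model.includes("think") || model.includes("zero")) chatMode = "zero";
  if (model.includes("deepresearch")) chatMode = "deep_research";
  // An empty string becomes undefined so the field is omitted from meta_data.
  return chatMode || undefined;
}
```

So `glm-4-zero` and `glm-4-think` select the reasoning mode, `glm-4-deepresearch` selects deep research, and anything else (including 24-character agent IDs) sends no `chat_mode` at all.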
@@ -208,9 +257,11 @@
       messages: messagesPrepare(messages, refs, !!refConvId),
       meta_data: {
         channel: "",
+        chat_mode: chatMode || undefined,
         draft_id: "",
         if_plus_model: true,
         input_question_type: "xxxx",
+        is_networking: true,
         is_test: false,
         platform: "pc",
         quote_log_id: ""
@@ -219,13 +270,12 @@
     {
       headers: {
         Authorization: `Bearer ${token}`,
-        Referer:
-          assistantId == DEFAULT_ASSISTANT_ID
-            ? "https://chatglm.cn/main/alltoolsdetail"
-            : `https://chatglm.cn/main/gdetail/${assistantId}`,
+        ...FAKE_HEADERS,
         "X-Device-Id": util.uuid(false),
         "X-Request-Id": util.uuid(false),
-        ...FAKE_HEADERS,
+        "X-Sign": sign.sign,
+        "X-Timestamp": sign.timestamp,
+        "X-Nonce": sign.nonce,
       },
       // 120s timeout
       timeout: 120000,
@@ -302,15 +352,22 @@ async function createCompletionStream(
   // Reset the reference if the referenced conversation ID is invalid
   if (!/[0-9a-zA-Z]{24}/.test(refConvId)) refConvId = "";
-  let assistantId = /^[a-z0-9]{24,}$/.test(model) ? model : undefined;
+  let assistantId = /^[a-z0-9]{24,}$/.test(model) ? model : DEFAULT_ASSISTANT_ID;
+  let chatMode = '';
   if(model.indexOf('think') != -1 || model.indexOf('zero') != -1) {
-    assistantId = ZERO_ASSISTANT_ID;
-    logger.info('使用思考模型');
+    chatMode = 'zero';
+    logger.info('使用【推理】模型');
+  }
+  if(model.indexOf('deepresearch') != -1) {
+    chatMode = 'deep_research';
+    logger.info('使用【沉思DeepResearch】模型');
   }
   // Request the stream
   const token = await acquireToken(refreshToken);
+  const sign = await generateSign();
   const result = await axios.post(
     `https://chatglm.cn/chatglm/backend-api/assistant/stream`,
     {
@@ -319,9 +376,11 @@ async function createCompletionStream(
       messages: messagesPrepare(messages, refs, !!refConvId),
       meta_data: {
         channel: "",
+        chat_mode: chatMode || undefined,
         draft_id: "",
         if_plus_model: true,
         input_question_type: "xxxx",
+        is_networking: true,
         is_test: false,
         platform: "pc",
         quote_log_id: ""
@@ -336,6 +395,9 @@ async function createCompletionStream(
         : `https://chatglm.cn/main/gdetail/${assistantId}`,
       "X-Device-Id": util.uuid(false),
       "X-Request-Id": util.uuid(false),
+      "X-Sign": sign.sign,
+      "X-Timestamp": sign.timestamp,
+      "X-Nonce": sign.nonce,
       ...FAKE_HEADERS,
     },
     // 120s timeout
@@ -420,6 +482,7 @@ async function generateImages(
   ];
   // Request the stream
   const token = await acquireToken(refreshToken);
+  const sign = await generateSign();
   const result = await axios.post(
     "https://chatglm.cn/chatglm/backend-api/assistant/stream",
     {
@@ -442,6 +505,9 @@
       Referer: `https://chatglm.cn/main/gdetail/${model}`,
       "X-Device-Id": util.uuid(false),
       "X-Request-Id": util.uuid(false),
+      "X-Sign": sign.sign,
+      "X-Timestamp": sign.timestamp,
+      "X-Nonce": sign.nonce,
       ...FAKE_HEADERS,
     },
     // 120s timeout
@@ -522,6 +588,7 @@ async function generateVideos(
   // Send the generation request
   let token = await acquireToken(refreshToken);
+  const sign = await generateSign();
   const result = await axios.post(
     `https://chatglm.cn/chatglm/video-api/v1/chat`,
     {
@@ -540,6 +607,9 @@
       Referer: "https://chatglm.cn/video",
       "X-Device-Id": util.uuid(false),
       "X-Request-Id": util.uuid(false),
+      "X-Sign": sign.sign,
+      "X-Timestamp": sign.timestamp,
+      "X-Nonce": sign.nonce,
       ...FAKE_HEADERS,
     },
     // 30s timeout
@@ -557,6 +627,7 @@ async function generateVideos(
     if (util.unixTimestamp() - startTime > 600)
       throw new APIException(EX.API_VIDEO_GENERATION_FAILED);
     const token = await acquireToken(refreshToken);
+    const sign = await generateSign();
     const result = await axios.get(
       `https://chatglm.cn/chatglm/video-api/v1/chat/status/${chatId}`,
       {
@@ -565,6 +636,9 @@
         Referer: "https://chatglm.cn/video",
         "X-Device-Id": util.uuid(false),
         "X-Request-Id": util.uuid(false),
+        "X-Sign": sign.sign,
+        "X-Timestamp": sign.timestamp,
+        "X-Nonce": sign.nonce,
         ...FAKE_HEADERS,
       },
       // 30s timeout
@@ -589,6 +663,7 @@ async function generateVideos(
     if (options.audioId) {
       const [key, id] = options.audioId.split("-");
       const token = await acquireToken(refreshToken);
+      const sign = await generateSign();
       const result = await axios.post(
         `https://chatglm.cn/chatglm/video-api/v1/static/composite_video`,
         {
@@ -602,6 +677,9 @@
           Referer: "https://chatglm.cn/video",
           "X-Device-Id": util.uuid(false),
           "X-Request-Id": util.uuid(false),
+          "X-Sign": sign.sign,
+          "X-Timestamp": sign.timestamp,
+          "X-Nonce": sign.nonce,
           ...FAKE_HEADERS,
         },
         // 30s timeout
@@ -923,6 +1001,9 @@ function checkResult(result: AxiosResponse, refreshToken: string) {
   if (!_.isFinite(code) && !_.isFinite(status)) return result.data;
   if (code === 0 || status === 0) return result.data;
   if (code == 401) accessTokenMap.delete(refreshToken);
+  if (message.includes('40102')) {
+    throw new APIException(EX.API_REQUEST_FAILED, `[请求glm失败]: 您的refresh_token已过期请重新登录获取`);
+  }
   throw new APIException(EX.API_REQUEST_FAILED, `[请求glm失败]: ${message}`);
 }
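The new branch above special-cases upstream messages containing the error code 40102 before the generic failure path. An illustrative reduction of that decision order (the helper below is ours, with English messages in place of the repo's Chinese literals):

```typescript
// Map an upstream error message to a user-facing description: a 40102
// code anywhere in the message means the refresh_token has expired.
function describeGlmError(message: string): string {
  if (message.includes("40102"))
    return "refresh_token expired; log in again to obtain a new one";
  return `glm request failed: ${message}`;
}
```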
@@ -950,7 +1031,9 @@ async function receiveStream(model: string, stream: any): Promise<any> {
     created: util.unixTimestamp(),
   };
   const isSilentModel = model.indexOf('silent') != -1;
+  const isThinkModel = model.indexOf('think') != -1 || model.indexOf('zero') != -1;
   let thinkingText = "";
+  let thinking = false;
   let toolCall = false;
   let codeGenerating = false;
   let textChunkLength = 0;
@@ -977,6 +1060,7 @@
         status: partStatus,
         type,
         text,
+        think,
         image,
         code,
         content,
@@ -995,13 +1079,22 @@
       }
       if (partStatus == "finish") textChunkLength = text.length;
       return innerStr + text;
-    } else if (type == "text_thinking" && !isSilentModel) {
+    } else if (type == "think" && isThinkModel && !isSilentModel) {
       if (toolCall) {
         innerStr += "\n";
         textOffset++;
         toolCall = false;
       }
-      thinkingText = text;
+      if (partStatus == "finish") textChunkLength = think.length;
+      thinkingText += think.substring(thinkingText.length, think.length);
+      return innerStr;
+    } else if (type == "think" && !isSilentModel) {
+      if (toolCall) {
+        innerStr += "\n";
+        textOffset++;
+        toolCall = false;
+      }
+      thinkingText += text;
       return innerStr;
     }else if (
       type == "quote_result" &&
@@ -1072,7 +1165,7 @@ async function receiveStream(model: string, stream: any): Promise<any> {
       data.choices[0].message.content += chunk;
     } else {
       if(thinkingText)
-        data.choices[0].message.content = `[思考开始]\n${thinkingText}[思考结束]\n\n${data.choices[0].message.content}`;
+        data.choices[0].message.content = `<think>\n${thinkingText}</think>\n\n${data.choices[0].message.content}`;
       data.choices[0].message.content =
         data.choices[0].message.content.replace(
           /【\d+†(来源|源|source)】/g,
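The replacement above swaps the Chinese `[思考开始]`/`[思考结束]` markers for the now-common `<think>` tag convention in non-streaming responses. A sketch of the resulting wrapping, assuming the same template literal (the helper name `wrapThinking` is ours):

```typescript
// Prepend accumulated reasoning text inside <think> tags, as the updated
// receiveStream does for non-streaming responses; no-op when empty.
function wrapThinking(thinkingText: string, content: string): string {
  if (!thinkingText) return content;
  return `<think>\n${thinkingText}</think>\n\n${content}`;
}
```

Clients that understand the DeepSeek-R1-style convention can then strip or render the reasoning block separately from the answer.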
@@ -1110,6 +1203,7 @@ function createTransStream(model: string, stream: any, endCallback?: Function) {
   // Create the transform stream
   const transStream = new PassThrough();
   const isSilentModel = model.indexOf('silent') != -1;
+  const isThinkModel = model.indexOf('think') != -1 || model.indexOf('zero') != -1;
   let content = "";
   let thinking = false;
   let toolCall = false;
@@ -1151,6 +1245,7 @@
         status: partStatus,
         type,
         text,
+        think,
         image,
         code,
         content,
@@ -1162,8 +1257,8 @@
       }
       if (type == "text") {
         if(thinking) {
-          innerStr += "[思考结束]\n\n"
-          textOffset = thinkingText.length + 8;
+          innerStr += "</think>\n\n"
+          textOffset += thinkingText.length + 8;
           thinking = false;
         }
         if (toolCall) {
@@ -1173,10 +1268,10 @@
         }
         if (partStatus == "finish") textChunkLength = text.length;
         return innerStr + text;
-      } else if (type == "text_thinking" && !isSilentModel) {
+      } else if (type == "think" && isThinkModel && !isSilentModel) {
         if(!thinking) {
-          innerStr += "[思考开始]\n";
-          textOffset = 7;
+          innerStr += "<think>\n";
+          textOffset += 7;
           thinking = true;
         }
         if (toolCall) {
@@ -1184,9 +1279,18 @@
           textOffset++;
           toolCall = false;
         }
-        if (partStatus == "finish") textChunkLength = text.length;
-        thinkingText += text.substring(thinkingText.length, text.length);
-        return innerStr + text;
+        if (partStatus == "finish") textChunkLength = think.length;
+        thinkingText += think.substring(thinkingText.length, think.length);
+        return innerStr + thinkingText;
+      } else if (type == "think" && !isSilentModel) {
+        if (toolCall) {
+          innerStr += "\n";
+          textOffset++;
+          toolCall = false;
+        }
+        if (partStatus == "finish") textChunkLength = thinkingText.length;
+        thinkingText += think;
+        return innerStr + thinkingText;
       } else if (
         type == "quote_result" &&
         status == "finish" &&
@@ -1196,9 +1300,9 @@
       ) {
         const searchText =
           meta_data.metadata_list.reduce(
-            (meta, v) => meta + `检索 ${v.title}(${v.url}) ...`,
+            (meta, v) => meta + `检索 ${v.title}(${v.url}) ...\n`,
             ""
-          ) + "\n";
+          );
         textOffset += searchText.length;
         toolCall = true;
         return innerStr + searchText;
@@ -1388,41 +1492,23 @@ function tokenSplit(authorization: string) {
   return authorization.replace("Bearer ", "").split(",");
 }

-/**
- * cookie
- *
- * @param refreshToken
- * @param token
- */
-function generateCookie(refreshToken: string, token: string) {
-  const timestamp = util.unixTimestamp();
-  const gsTimestamp = timestamp - Math.round(Math.random() * 2592000);
-  return {
-    chatglm_refresh_token: refreshToken,
-    // chatglm_user_id: '',
-    _ga_PMD05MS2V9: `GS1.1.${gsTimestamp}.18.0.${gsTimestamp}.0.0.0`,
-    chatglm_token: token,
-    chatglm_token_expires: util.getDateString("yyyy-MM-dd HH:mm:ss"),
-    abtestid: "a",
-    // acw_tc: ''
-  };
-}
 /**
  * Token live status
  */
 async function getTokenLiveStatus(refreshToken: string) {
+  const sign = await generateSign();
   const result = await axios.post(
-    "https://chatglm.cn/chatglm/backend-api/v1/user/refresh",
+    "https://chatglm.cn/chatglm/user-api/user/refresh",
-    {},
+    undefined,
     {
       headers: {
         Authorization: `Bearer ${refreshToken}`,
         Referer: "https://chatglm.cn/main/alltoolsdetail",
         "X-Device-Id": util.uuid(false),
         "X-Request-Id": util.uuid(false),
+        "X-Sign": sign.sign,
+        "X-Timestamp": sign.timestamp,
+        "X-Nonce": sign.nonce,
         ...FAKE_HEADERS,
       },
       timeout: 15000,

View File

@@ -3,10 +3,6 @@ import _ from 'lodash';
 import Request from '@/lib/request/Request.ts';
 import Response from '@/lib/response/Response.ts';
 import chat from '@/api/controllers/chat.ts';
-import logger from '@/lib/logger.ts';
-
-// Assistant ID of the zero reasoning model
-const ZERO_ASSISTANT_ID = "676411c38945bbc58a905d31";

 export default {