Mirror of https://github.com/LLM-Red-Team/glm-free-api.git (synced 2025-04-29 15:59:58 +08:00)

Compare commits: "master" vs "0.0.33"
No commits in common. "master" and "0.0.33" have entirely different histories.

README.md (82 lines changed)
@@ -9,11 +9,11 @@
 
 
 
-Supports GLM-4-Plus high-speed streaming output, multi-turn conversation, agent conversation, the DeepResearch model, the Zero reasoning model, video generation, AI drawing, internet search, long-document reading, and image analysis; zero-configuration deployment, multi-token support, and automatic cleanup of session traces.
+Supports high-speed streaming output, multi-turn conversation, agent conversation, video generation, AI drawing, internet search, long-document reading, and image analysis; zero-configuration deployment, multi-token support, and automatic cleanup of session traces.
 
 Fully compatible with the ChatGPT interface.
 
-The following ten free-api projects are also worth following:
+The following nine free-api projects are also worth following:
 
 Moonshot AI (Kimi.ai) API to API [kimi-free-api](https://github.com/LLM-Red-Team/kimi-free-api)
 
@@ -25,8 +25,6 @@ Moonshot AI(Kimi.ai)接口转API [kimi-free-api](https://github.com/LLM-Red-
 
 ByteDance (Doubao) API to API [doubao-free-api](https://github.com/LLM-Red-Team/doubao-free-api)
 
-ByteDance (Jimeng AI) API to API [jimeng-free-api](https://github.com/LLM-Red-Team/jimeng-free-api)
-
 iFlytek Spark API to API [spark-free-api](https://github.com/LLM-Red-Team/spark-free-api)
 
 MiniMax (Hailuo AI) API to API [hailuo-free-api](https://github.com/LLM-Red-Team/hailuo-free-api)
@@ -37,40 +35,29 @@ MiniMax(海螺AI)接口转API [hailuo-free-api](https://github.com/LLM-Red-T
 
 ## Table of Contents
 
-- [GLM AI Free Service](#glm-ai-free-服务)
-- [Table of Contents](#目录)
-- [Disclaimer](#免责声明)
-- [Effect Examples](#效果示例)
-- [Identity Verification Demo](#验明正身demo)
-- [Agent Conversation Demo](#智能体对话demo)
-- [Dify Workflow Demo](#结合dify工作流demo)
-- [Multi-turn Conversation Demo](#多轮对话demo)
-- [Video Generation Demo](#视频生成demo)
-- [AI Drawing Demo](#ai绘图demo)
-- [Internet Search Demo](#联网搜索demo)
-- [Long Document Reading Demo](#长文档解读demo)
-- [Code Invocation Demo](#代码调用demo)
-- [Image Analysis Demo](#图像解析demo)
-- [Access Preparation](#接入准备)
-- [Agent Access](#智能体接入)
-- [Multiple Account Access](#多账号接入)
-- [Docker Deployment](#docker部署)
-- [Docker-compose Deployment](#docker-compose部署)
-- [Render Deployment](#render部署)
-- [Vercel Deployment](#vercel部署)
-- [Native Deployment](#原生部署)
-- [Recommended Clients](#推荐使用客户端)
-- [Interface List](#接口列表)
-- [Conversation Completion](#对话补全)
-- [Video Generation](#视频生成)
-- [AI Drawing](#ai绘图)
-- [Document Interpretation](#文档解读)
-- [Image Analysis](#图像解析)
-- [refresh\_token Liveness Check](#refresh_token存活检测)
-- [Notes](#注意事项)
-- [Nginx Reverse-Proxy Optimization](#nginx反代优化)
-- [Token Statistics](#token统计)
-- [Star History](#star-history)
+* [Disclaimer](#免责声明)
+* [Online Experience](#在线体验)
+* [Effect Examples](#效果示例)
+* [Access Preparation](#接入准备)
+* [Agent Access](#智能体接入)
+* [Multiple Account Access](#多账号接入)
+* [Docker Deployment](#Docker部署)
+* [Docker-compose Deployment](#Docker-compose部署)
+* [Render Deployment](#Render部署)
+* [Vercel Deployment](#Vercel部署)
+* [Native Deployment](#原生部署)
+* [Recommended Clients](#推荐使用客户端)
+* [Interface List](#接口列表)
+* [Conversation Completion](#对话补全)
+* [Video Generation](#视频生成)
+* [AI Drawing](#AI绘图)
+* [Document Interpretation](#文档解读)
+* [Image Analysis](#图像解析)
+* [refresh_token Liveness Check](#refresh_token存活检测)
+* [Notes](#注意事项)
+* [Nginx Reverse-Proxy Optimization](#Nginx反代优化)
+* [Token Statistics](#Token统计)
+* [Star History](#star-history)
 
 ## Disclaimer
 
@@ -84,6 +71,12 @@ MiniMax(海螺AI)接口转API [hailuo-free-api](https://github.com/LLM-Red-T
 
 **For personal use only. Do not provide the service to others or use it commercially, to avoid putting pressure on the official service; otherwise, you bear the risk yourself!**
 
+## Online Experience
+
+This link is only for temporary feature testing and supports a single concurrent session; if you hit an error, retry later. Self-deployment is recommended.
+
+https://udify.app/chat/Pe89TtaX3rKXM8NS
+
 ## Effect Examples
 
 ### Identity Verification Demo
@@ -298,11 +291,8 @@ Authorization: Bearer [refresh_token]
 Request data:
 ```json
 {
-    // Default model: glm-4-plus
-    // Zero reasoning models: glm-4-zero / glm-4-think
-    // DeepResearch model: glm-4-deepresearch
-    // If using an agent, fill in the agent ID here
-    "model": "glm-4-plus",
+    // If using an agent, fill in the agent ID here; otherwise any value works
+    "model": "glm4",
     // Multi-turn conversation is currently implemented via message merging, which may degrade capability in some scenarios and is limited by the single-turn max token count
     // For a native multi-turn experience, pass the id returned by the first turn as conversation_id to continue the context
     // "conversation_id": "65f6c28546bae1f0fbb532de",
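Since the service is ChatGPT-compatible, the request above can be assembled like any OpenAI-style chat completion call. A minimal sketch in TypeScript of building that request; the base URL (a local deployment on port 8000) and the `/v1/chat/completions` path are assumptions based on typical free-api deployments, not stated in this diff:

```typescript
// Hypothetical client-side helper; only the body fields and the
// Authorization: Bearer [refresh_token] scheme come from the README above.
interface ChatMessage {
  role: "user" | "assistant" | "system";
  content: string;
}

function buildChatRequest(
  refreshToken: string,
  messages: ChatMessage[],
  conversationId?: string
) {
  const body: Record<string, unknown> = {
    model: "glm-4-plus", // or an agent ID; master also accepts glm-4-zero / glm-4-deepresearch
    messages,
  };
  // Passing the id returned by the previous turn continues the context natively
  if (conversationId) body.conversation_id = conversationId;
  return {
    url: "http://localhost:8000/v1/chat/completions", // assumed local deployment
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${refreshToken}`,
    },
    body: JSON.stringify(body),
  };
}
```

The returned object can be passed directly to `fetch` or axios; the sketch only builds the payload so the shape can be checked without a live token.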
@@ -322,7 +312,7 @@ Authorization: Bearer [refresh_token]
 {
     // For a native multi-turn experience, pass this id as conversation_id in the next turn to continue the context
     "id": "65f6c28546bae1f0fbb532de",
-    "model": "glm-4",
+    "model": "glm4",
     "object": "chat.completion",
     "choices": [
         {
@@ -447,7 +437,7 @@ Authorization: Bearer [refresh_token]
 ```json
 {
     // If using an agent, fill in the agent ID here; otherwise any value works
-    "model": "glm-4",
+    "model": "glm4",
     "messages": [
         {
             "role": "user",
@@ -474,7 +464,7 @@ Authorization: Bearer [refresh_token]
 ```json
 {
     "id": "cnmuo7mcp7f9hjcmihn0",
-    "model": "glm-4",
+    "model": "glm4",
     "object": "chat.completion",
     "choices": [
         {
README_EN.md (73 lines changed)
@@ -5,7 +5,7 @@
 
 
 
-Supports high-speed streaming output, multi-turn dialogues, internet search, long document reading, image analysis, deepresearch, zero-configuration deployment, multi-token support, and automatic session trace cleanup.
+Supports high-speed streaming output, multi-turn dialogues, internet search, long document reading, image analysis, zero-configuration deployment, multi-token support, and automatic session trace cleanup.
 
 Fully compatible with the ChatGPT interface.
 
@@ -33,41 +33,29 @@ Lingxin Intelligence (Emohaa) API to API [emohaa-free-api](https://github.com/LL
 
 ## Table of Contents
 
-- [GLM AI Free Service](#glm-ai-free-service)
-- [Table of Contents](#table-of-contents)
-- [Announcement](#announcement)
-- [Online Experience](#online-experience)
-- [Effect Examples](#effect-examples)
-- [Identity Verification](#identity-verification)
-- [AI-Agent](#ai-agent)
-- [Combined with Dify workflow](#combined-with-dify-workflow)
-- [Multi-turn Dialogue](#multi-turn-dialogue)
-- [Video Generation](#video-generation)
-- [AI Drawing](#ai-drawing)
-- [Internet Search](#internet-search)
-- [Long Document Reading](#long-document-reading)
-- [Using Code](#using-code)
-- [Image Analysis](#image-analysis)
-- [Access Preparation](#access-preparation)
-- [Agent Access](#agent-access)
-- [Multiple Account Access](#multiple-account-access)
-- [Docker Deployment](#docker-deployment)
-- [Docker-compose Deployment](#docker-compose-deployment)
-- [Render Deployment](#render-deployment)
-- [Vercel Deployment](#vercel-deployment)
-- [Native Deployment](#native-deployment)
-- [Recommended Clients](#recommended-clients)
-- [interface List](#interface-list)
-- [Conversation Completion](#conversation-completion)
-- [Video Generation](#video-generation-1)
-- [AI Drawing](#ai-drawing-1)
-- [Document Interpretation](#document-interpretation)
-- [Image Analysis](#image-analysis-1)
-- [Refresh\_token Survival Detection](#refresh_token-survival-detection)
-- [Notification](#notification)
-- [Nginx Anti-generation Optimization](#nginx-anti-generation-optimization)
-- [Token Statistics](#token-statistics)
-- [Star History](#star-history)
+* [Announcement](#Announcement)
+* [Online Experience](#Online-Experience)
+* [Effect Examples](#Effect-Examples)
+* [Access Preparation](#Access-Preparation)
+* [Agent Access](#Agent-Access)
+* [Multiple Account Access](#Multiple-Account-Access)
+* [Docker Deployment](#Docker-Deployment)
+* [Docker-compose Deployment](#Docker-compose-Deployment)
+* [Render Deployment](#Render-Deployment)
+* [Vercel Deployment](#Vercel-Deployment)
+* [Native Deployment](#Native-Deployment)
+* [Recommended Clients](#Recommended-Clients)
+* [Interface List](#Interface-List)
+* [Conversation Completion](#Conversation-Completion)
+* [Video Generation](#Video-Generation)
+* [AI Drawing](#AI-Drawing)
+* [Document Interpretation](#Document-Interpretation)
+* [Image Analysis](#Image-Analysis)
+* [Refresh_token Survival Detection](#Refresh_token-Survival-Detection)
+* [Notification](#Notification)
+* [Nginx Anti-generation Optimization](#Nginx-Anti-generation-Optimization)
+* [Token Statistics](#Token-Statistics)
+* [Star History](#star-history)
 
 ## Announcement
 
@@ -301,11 +289,8 @@ Authorization: Bearer [refresh_token]
 Request data:
 ```json
 {
-    // Default model: glm-4-plus
-    // zero thinking model: glm-4-zero / glm-4-think
-    // deepresearch model: glm-4-deepresearch
-    // If using the Agent, fill in the Agent ID here
-    "model": "glm-4",
+    // Except using the Agent to fill the ID, fill in the model name as you like.
+    "model": "glm4",
     // Currently, multi-round conversations are realized based on message merging, which in some scenarios may lead to capacity degradation and is limited by the maximum number of tokens in a single round.
     // If you want a native multi-round dialog experience, you can pass in the ids obtained from the last round of messages to pick up the context
     // "conversation_id": "65f6c28546bae1f0fbb532de",
@@ -324,7 +309,7 @@ Response data:
 ```json
 {
     "id": "65f6c28546bae1f0fbb532de",
-    "model": "glm-4",
+    "model": "glm4",
     "object": "chat.completion",
     "choices": [
         {
@@ -449,7 +434,7 @@ Request data:
 ```json
 {
     // If using an agent, fill in the agent ID here; otherwise any value works
-    "model": "glm-4",
+    "model": "glm4",
     "messages": [
         {
             "role": "user",
@@ -476,7 +461,7 @@ Response data:
 ```json
 {
     "id": "cnmuo7mcp7f9hjcmihn0",
-    "model": "glm-4",
+    "model": "glm4",
     "object": "chat.completion",
     "choices": [
         {
package.json

@@ -1,6 +1,6 @@
 {
   "name": "glm-free-api",
-  "version": "0.0.37",
+  "version": "0.0.33",
   "description": "GLM Free API Server",
   "type": "module",
   "main": "dist/index.js",
@@ -3,6 +3,7 @@ import path from "path";
 import _ from "lodash";
 import mime from "mime";
 import sharp from "sharp";
+import fs from "fs-extra";
 import FormData from "form-data";
 import axios, { AxiosResponse } from "axios";
 
@@ -16,8 +17,6 @@ import util from "@/lib/util.ts";
 const MODEL_NAME = "glm";
 // Default agent ID (GLM4)
 const DEFAULT_ASSISTANT_ID = "65940acff94777010aa6b796";
-// Signing secret (remember to update it when the official site changes)
-const SIGN_SECRET = "8a1317a7468aa3ad86e997d08f3f31cb";
 // access_token validity period
 const ACCESS_TOKEN_EXPIRES = 3600;
 // Max retry count
@@ -26,28 +25,17 @@ const MAX_RETRY_COUNT = 3;
 const RETRY_DELAY = 5000;
 // Fake headers
 const FAKE_HEADERS = {
-  "Accept": "application/json, text/plain, */*",
-  "Accept-Encoding": "gzip, deflate, br, zstd",
-  "Accept-Language": "zh-CN,zh;q=0.9,en;q=0.8",
-  "Cache-Control": "no-cache",
+  Accept: "*/*",
   "App-Name": "chatglm",
-  "Origin": "https://chatglm.cn",
-  "Pragma": "no-cache",
-  "sec-ch-ua":
-    '"Chromium";v="134", "Not:A-Brand";v="24", "Google Chrome";v="134"',
-  "sec-ch-ua-mobile": "?0",
-  "sec-ch-ua-platform": '"macOS"',
-  "Sec-Fetch-Dest": "empty",
-  "Sec-Fetch-Mode": "cors",
-  "Sec-Fetch-Site": "same-origin",
+  Platform: "pc",
+  Origin: "https://chatglm.cn",
+  "Sec-Ch-Ua":
+    '"Chromium";v="122", "Not(A:Brand";v="24", "Google Chrome";v="122"',
+  "Sec-Ch-Ua-Mobile": "?0",
+  "Sec-Ch-Ua-Platform": '"Windows"',
   "User-Agent":
-    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36",
-  'X-App-Platform': 'pc',
-  'X-App-Version': '0.0.1',
-  'X-Device-Brand': '',
-  'X-Device-Model': '',
-  'X-Exp-Groups': 'na_android_config:exp:NA,na_4o_config:exp:4o_A,na_glm4plus_config:exp:open,mainchat_server_app:exp:A,mobile_history_daycheck:exp:a,desktop_toolbar:exp:A,chat_drawing_server:exp:A,drawing_server_cogview:exp:cogview4,app_welcome_v2:exp:B,chat_drawing_streamv2:exp:A,mainchat_rm_fc:exp:add,mainchat_dr:exp:open,chat_auto_entrance:exp:A',
-  'X-Lang': 'zh'
+    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36",
+  Version: "0.0.1",
 };
 // Max file size
 const FILE_MAX_SIZE = 100 * 1024 * 1024;
@@ -56,29 +44,6 @@ const accessTokenMap = new Map();
 // access_token request queue map
 const accessTokenRequestQueueMap: Record<string, Function[]> = {};
 
-/**
- * Generate the sign
- */
-async function generateSign() {
-  // Zhipu's timestamp algorithm (remember to update it when the official site changes)
-  const e = Date.now()
-    , A = e.toString()
-    , t = A.length
-    , o = A.split("").map((e => Number(e)))
-    , i = o.reduce(( (e, A) => e + A), 0) - o[t - 2]
-    , a = i % 10;
-  const timestamp = A.substring(0, t - 2) + a + A.substring(t - 1, t);
-  // Random UUID
-  const nonce = util.uuid(false);
-  // Signature
-  const sign = util.md5(`${timestamp}-${nonce}-${SIGN_SECRET}`);
-  return {
-    timestamp,
-    nonce,
-    sign
-  }
-}
-
 /**
  * Request access_token
  *
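The removed `generateSign` above is worth unpacking: the "timestamp" is the millisecond epoch with its second-to-last digit replaced by a checksum, namely (sum of all digits minus the second-to-last digit) mod 10, and the sign is an MD5 over `timestamp-nonce-SIGN_SECRET`. A de-obfuscated sketch, assuming Node's `crypto` stands in for the project's `util.md5`/`util.uuid` helpers and that `util.uuid(false)` means a dash-less UUID:

```typescript
import { createHash, randomUUID } from "crypto";

// Same value as the removed SIGN_SECRET constant above.
const SIGN_SECRET = "8a1317a7468aa3ad86e997d08f3f31cb";

// Readable restatement of the removed generateSign(); `now` is injectable
// so the digit transformation can be checked deterministically.
function generateSign(now: number = Date.now()) {
  const s = now.toString();
  const n = s.length;
  const digits = s.split("").map(Number);
  // Checksum digit: (sum of all digits - second-to-last digit) mod 10
  const checksum = (digits.reduce((a, b) => a + b, 0) - digits[n - 2]) % 10;
  // Replace the second-to-last digit with the checksum
  const timestamp = s.slice(0, n - 2) + checksum + s.slice(n - 1);
  const nonce = randomUUID().replace(/-/g, ""); // assumed dash-less, like util.uuid(false)
  const sign = createHash("md5")
    .update(`${timestamp}-${nonce}-${SIGN_SECRET}`)
    .digest("hex");
  return { timestamp, nonce, sign };
}
```

Because only one digit is rewritten, the result still parses as a plausible millisecond timestamp of the same length.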
@@ -94,32 +59,26 @@ async function requestToken(refreshToken: string) {
   accessTokenRequestQueueMap[refreshToken] = [];
   logger.info(`Refresh token: ${refreshToken}`);
   const result = await (async () => {
-    // Generate the sign
-    const sign = await generateSign();
     const result = await axios.post(
-      "https://chatglm.cn/chatglm/user-api/user/refresh",
+      "https://chatglm.cn/chatglm/backend-api/v1/user/refresh",
       {},
       {
         headers: {
-          // Referer: "https://chatglm.cn/main/alltoolsdetail",
           Authorization: `Bearer ${refreshToken}`,
-          "Content-Type": "application/json",
-          ...FAKE_HEADERS,
+          Referer: "https://chatglm.cn/main/alltoolsdetail",
           "X-Device-Id": util.uuid(false),
-          "X-Nonce": sign.nonce,
           "X-Request-Id": util.uuid(false),
-          "X-Sign": sign.sign,
-          "X-Timestamp": `${sign.timestamp}`,
+          ...FAKE_HEADERS,
         },
         timeout: 15000,
         validateStatus: () => true,
       }
     );
     const { result: _result } = checkResult(result, refreshToken);
-    const { access_token, refresh_token } = _result;
+    const { accessToken } = _result;
     return {
-      accessToken: access_token,
-      refreshToken: refresh_token,
+      accessToken,
+      refreshToken,
       refreshTime: util.unixTimestamp() + ACCESS_TOKEN_EXPIRES,
     };
   })()
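Both branches cache the token with a `refreshTime` of now plus `ACCESS_TOKEN_EXPIRES` seconds. A minimal sketch of that expiry bookkeeping; the entry shape mirrors the return value above, while `unixTimestamp` is an illustrative stand-in for the project's `util.unixTimestamp`:

```typescript
// Matches the constant in the diff above (seconds).
const ACCESS_TOKEN_EXPIRES = 3600;

interface TokenEntry {
  accessToken: string;
  refreshToken: string;
  refreshTime: number; // unix seconds after which the token must be refreshed
}

const unixTimestamp = (): number => Math.floor(Date.now() / 1000);

// Build a cache entry the way requestToken's return value does.
function cacheEntry(
  accessToken: string,
  refreshToken: string,
  now: number = unixTimestamp()
): TokenEntry {
  return { accessToken, refreshToken, refreshTime: now + ACCESS_TOKEN_EXPIRES };
}

// acquireToken would re-request when this returns true.
function isExpired(entry: TokenEntry, now: number = unixTimestamp()): boolean {
  return now >= entry.refreshTime;
}
```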
@@ -179,7 +138,7 @@ async function removeConversation(
   assistantId = DEFAULT_ASSISTANT_ID
 ) {
   const token = await acquireToken(refreshToken);
-  const sign = await generateSign();
   const result = await axios.post(
     "https://chatglm.cn/chatglm/backend-api/assistant/conversation/delete",
     {
@@ -192,9 +151,6 @@ async function removeConversation(
       Referer: `https://chatglm.cn/main/alltoolsdetail`,
       "X-Device-Id": util.uuid(false),
       "X-Request-Id": util.uuid(false),
-      "X-Sign": sign.sign,
-      "X-Timestamp": sign.timestamp,
-      "X-Nonce": sign.nonce,
       ...FAKE_HEADERS,
     },
     timeout: 15000,
@@ -209,13 +165,13 @@
  *
  * @param messages GPT-series message format; provide the full context for multi-turn conversations
  * @param refreshToken refresh_token used to refresh the access_token
- * @param model Agent ID; defaults to the original GLM4
+ * @param assistantId Agent ID; defaults to the original GLM4
  * @param retryCount Retry count
  */
 async function createCompletion(
   messages: any[],
   refreshToken: string,
-  model = MODEL_NAME,
+  assistantId = DEFAULT_ASSISTANT_ID,
   refConvId = "",
   retryCount = 0
 ) {
@@ -233,22 +189,8 @@ async function createCompletion(
   // Reset the reference if the referenced conversation ID is invalid
   if (!/[0-9a-zA-Z]{24}/.test(refConvId)) refConvId = "";
 
-  let assistantId = /^[a-z0-9]{24,}$/.test(model) ? model : DEFAULT_ASSISTANT_ID;
-  let chatMode = '';
-
-  if (model.indexOf('think') != -1 || model.indexOf('zero') != -1) {
-    chatMode = 'zero';
-    logger.info('使用【推理】模型');
-  }
-
-  if (model.indexOf('deepresearch') != -1) {
-    chatMode = 'deep_research';
-    logger.info('使用【沉思(DeepResearch)】模型');
-  }
-
   // Request the stream
   const token = await acquireToken(refreshToken);
-  const sign = await generateSign();
   const result = await axios.post(
     "https://chatglm.cn/chatglm/backend-api/assistant/stream",
     {
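The removed block above is master's model routing: a name that looks like a 24+ character agent ID is used directly as `assistantId`, and the substrings `zero`/`think` or `deepresearch` in the model name select a `chat_mode`. Restated as a pure function for clarity (the function name `resolveModel` is illustrative, not the project's):

```typescript
// Same constant as in the diff above.
const DEFAULT_ASSISTANT_ID = "65940acff94777010aa6b796";

function resolveModel(model: string): {
  assistantId: string;
  chatMode: string | undefined;
} {
  // A 24+ char lowercase alphanumeric name is treated as an agent ID
  const assistantId = /^[a-z0-9]{24,}$/.test(model) ? model : DEFAULT_ASSISTANT_ID;
  let chatMode = "";
  // "zero"/"think" selects the Zero reasoning mode
  if (model.includes("think") || model.includes("zero")) chatMode = "zero";
  // "deepresearch" selects the DeepResearch mode (takes precedence, as above)
  if (model.includes("deepresearch")) chatMode = "deep_research";
  return { assistantId, chatMode: chatMode || undefined };
}
```

An empty `chatMode` is mapped to `undefined` so that, as in `chat_mode: chatMode || undefined` below, the field is dropped from the serialized JSON payload entirely.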
@@ -257,25 +199,21 @@ async function createCompletion(
       messages: messagesPrepare(messages, refs, !!refConvId),
       meta_data: {
         channel: "",
-        chat_mode: chatMode || undefined,
         draft_id: "",
-        if_plus_model: true,
         input_question_type: "xxxx",
-        is_networking: true,
         is_test: false,
-        platform: "pc",
-        quote_log_id: ""
       },
     },
     {
       headers: {
         Authorization: `Bearer ${token}`,
-        ...FAKE_HEADERS,
+        Referer:
+          assistantId == DEFAULT_ASSISTANT_ID
+            ? "https://chatglm.cn/main/alltoolsdetail"
+            : `https://chatglm.cn/main/gdetail/${assistantId}`,
         "X-Device-Id": util.uuid(false),
         "X-Request-Id": util.uuid(false),
-        "X-Sign": sign.sign,
-        "X-Timestamp": sign.timestamp,
-        "X-Nonce": sign.nonce,
+        ...FAKE_HEADERS,
       },
       // 120-second timeout
       timeout: 120000,
@@ -293,7 +231,7 @@
 
   const streamStartTime = util.timestamp();
   // Receive the stream as output text
-  const answer = await receiveStream(model, result.data);
+  const answer = await receiveStream(result.data);
   logger.success(
     `Stream has completed transfer ${util.timestamp() - streamStartTime}ms`
   );
@@ -313,7 +251,7 @@
     return createCompletion(
       messages,
       refreshToken,
-      model,
+      assistantId,
       refConvId,
       retryCount + 1
     );
@@ -328,13 +266,13 @@
  *
  * @param messages GPT-series message format; provide the full context for multi-turn conversations
  * @param refreshToken refresh_token used to refresh the access_token
- * @param model Agent ID; defaults to the original GLM4
+ * @param assistantId Agent ID; defaults to the original GLM4
  * @param retryCount Retry count
  */
 async function createCompletionStream(
   messages: any[],
   refreshToken: string,
-  model = MODEL_NAME,
+  assistantId = DEFAULT_ASSISTANT_ID,
   refConvId = "",
   retryCount = 0
 ) {
@@ -352,22 +290,8 @@ async function createCompletionStream(
   // Reset the reference if the referenced conversation ID is invalid
   if (!/[0-9a-zA-Z]{24}/.test(refConvId)) refConvId = "";
 
-  let assistantId = /^[a-z0-9]{24,}$/.test(model) ? model : DEFAULT_ASSISTANT_ID;
-  let chatMode = '';
-
-  if (model.indexOf('think') != -1 || model.indexOf('zero') != -1) {
-    chatMode = 'zero';
-    logger.info('使用【推理】模型');
-  }
-
-  if (model.indexOf('deepresearch') != -1) {
-    chatMode = 'deep_research';
-    logger.info('使用【沉思(DeepResearch)】模型');
-  }
-
   // Request the stream
   const token = await acquireToken(refreshToken);
-  const sign = await generateSign();
   const result = await axios.post(
     `https://chatglm.cn/chatglm/backend-api/assistant/stream`,
     {
@@ -376,14 +300,9 @@
       messages: messagesPrepare(messages, refs, !!refConvId),
       meta_data: {
         channel: "",
-        chat_mode: chatMode || undefined,
         draft_id: "",
-        if_plus_model: true,
         input_question_type: "xxxx",
-        is_networking: true,
         is_test: false,
-        platform: "pc",
-        quote_log_id: ""
       },
     },
     {
@@ -395,9 +314,6 @@ async function createCompletionStream(
           : `https://chatglm.cn/main/gdetail/${assistantId}`,
         "X-Device-Id": util.uuid(false),
         "X-Request-Id": util.uuid(false),
-        "X-Sign": sign.sign,
-        "X-Timestamp": sign.timestamp,
-        "X-Nonce": sign.nonce,
         ...FAKE_HEADERS,
       },
       // 120-second timeout
@@ -438,7 +354,7 @@ async function createCompletionStream(
 
   const streamStartTime = util.timestamp();
   // Create a transform stream to convert messages to the GPT-compatible format
-  return createTransStream(model, result.data, (convId: string) => {
+  return createTransStream(result.data, (convId: string) => {
     logger.success(
       `Stream has completed transfer ${util.timestamp() - streamStartTime}ms`
     );
@@ -456,7 +372,7 @@
     return createCompletionStream(
       messages,
       refreshToken,
-      model,
+      assistantId,
       refConvId,
       retryCount + 1
     );
@@ -482,7 +398,6 @@ async function generateImages(
   ];
   // Request the stream
   const token = await acquireToken(refreshToken);
-  const sign = await generateSign();
   const result = await axios.post(
     "https://chatglm.cn/chatglm/backend-api/assistant/stream",
     {
@@ -492,11 +407,8 @@
       meta_data: {
         channel: "",
         draft_id: "",
-        if_plus_model: true,
         input_question_type: "xxxx",
         is_test: false,
-        platform: "pc",
-        quote_log_id: ""
       },
     },
     {
@@ -505,9 +417,6 @@
       Referer: `https://chatglm.cn/main/gdetail/${model}`,
       "X-Device-Id": util.uuid(false),
       "X-Request-Id": util.uuid(false),
-      "X-Sign": sign.sign,
-      "X-Timestamp": sign.timestamp,
-      "X-Nonce": sign.nonce,
       ...FAKE_HEADERS,
     },
     // 120-second timeout
@@ -588,7 +497,6 @@ async function generateVideos(
 
   // Send the generation request
   let token = await acquireToken(refreshToken);
-  const sign = await generateSign();
   const result = await axios.post(
     `https://chatglm.cn/chatglm/video-api/v1/chat`,
     {
@@ -607,9 +515,6 @@
       Referer: "https://chatglm.cn/video",
       "X-Device-Id": util.uuid(false),
       "X-Request-Id": util.uuid(false),
-      "X-Sign": sign.sign,
-      "X-Timestamp": sign.timestamp,
-      "X-Nonce": sign.nonce,
       ...FAKE_HEADERS,
     },
     // 30-second timeout
@@ -627,7 +532,6 @@
     if (util.unixTimestamp() - startTime > 600)
       throw new APIException(EX.API_VIDEO_GENERATION_FAILED);
     const token = await acquireToken(refreshToken);
-    const sign = await generateSign();
     const result = await axios.get(
       `https://chatglm.cn/chatglm/video-api/v1/chat/status/${chatId}`,
       {
@@ -636,9 +540,6 @@
         Referer: "https://chatglm.cn/video",
         "X-Device-Id": util.uuid(false),
         "X-Request-Id": util.uuid(false),
-        "X-Sign": sign.sign,
-        "X-Timestamp": sign.timestamp,
-        "X-Nonce": sign.nonce,
         ...FAKE_HEADERS,
       },
       // 30-second timeout
@@ -663,7 +564,6 @@
   if (options.audioId) {
     const [key, id] = options.audioId.split("-");
     const token = await acquireToken(refreshToken);
-    const sign = await generateSign();
     const result = await axios.post(
       `https://chatglm.cn/chatglm/video-api/v1/static/composite_video`,
       {
@@ -677,9 +577,6 @@
         Referer: "https://chatglm.cn/video",
         "X-Device-Id": util.uuid(false),
         "X-Request-Id": util.uuid(false),
-        "X-Sign": sign.sign,
-        "X-Timestamp": sign.timestamp,
-        "X-Nonce": sign.nonce,
         ...FAKE_HEADERS,
       },
       // 30-second timeout
@@ -1001,24 +898,20 @@ function checkResult(result: AxiosResponse, refreshToken: string) {
   if (!_.isFinite(code) && !_.isFinite(status)) return result.data;
   if (code === 0 || status === 0) return result.data;
   if (code == 401) accessTokenMap.delete(refreshToken);
-  if (message.includes('40102')) {
-    throw new APIException(EX.API_REQUEST_FAILED, `[请求glm失败]: 您的refresh_token已过期,请重新登录获取`);
-  }
   throw new APIException(EX.API_REQUEST_FAILED, `[请求glm失败]: ${message}`);
 }
 
 /**
  * Receive the complete message content from the stream
  *
- * @param model Model
  * @param stream Message stream
  */
-async function receiveStream(model: string, stream: any): Promise<any> {
+async function receiveStream(stream: any): Promise<any> {
   return new Promise((resolve, reject) => {
     // Message initialization
     const data = {
       id: "",
-      model,
+      model: MODEL_NAME,
       object: "chat.completion",
       choices: [
         {
@@ -1030,10 +923,6 @@ async function receiveStream(model: string, stream: any): Promise<any> {
       usage: { prompt_tokens: 1, completion_tokens: 1, total_tokens: 2 },
       created: util.unixTimestamp(),
     };
-    const isSilentModel = model.indexOf('silent') != -1;
-    const isThinkModel = model.indexOf('think') != -1 || model.indexOf('zero') != -1;
-    let thinkingText = "";
-    let thinking = false;
     let toolCall = false;
     let codeGenerating = false;
     let textChunkLength = 0;
@@ -1041,7 +930,6 @@ async function receiveStream(model: string, stream: any): Promise<any> {
     let lastExecutionOutput = "";
     let textOffset = 0;
     let refContent = "";
-    logger.info(`是否静默模型: ${isSilentModel}`);
     const parser = createParser((event) => {
       try {
         if (event.type !== "event") return;
@@ -1060,7 +948,6 @@ async function receiveStream(model: string, stream: any): Promise<any> {
             status: partStatus,
             type,
             text,
-            think,
             image,
             code,
             content,
@@ -1070,7 +957,6 @@ async function receiveStream(model: string, stream: any): Promise<any> {
             textChunkLength = 0;
             innerStr += "\n";
           }
-
           if (type == "text") {
             if (toolCall) {
               innerStr += "\n";
@@ -1079,29 +965,11 @@ async function receiveStream(model: string, stream: any): Promise<any> {
             }
             if (partStatus == "finish") textChunkLength = text.length;
             return innerStr + text;
-          } else if (type == "think" && isThinkModel && !isSilentModel) {
-            if (toolCall) {
-              innerStr += "\n";
-              textOffset++;
-              toolCall = false;
-            }
-            if (partStatus == "finish") textChunkLength = think.length;
-            thinkingText += think.substring(thinkingText.length, think.length);
-            return innerStr;
-          } else if (type == "think" && !isSilentModel) {
-            if (toolCall) {
-              innerStr += "\n";
-              textOffset++;
-              toolCall = false;
-            }
-            thinkingText += text;
-            return innerStr;
           } else if (
             type == "quote_result" &&
             status == "finish" &&
             meta_data &&
-            _.isArray(meta_data.metadata_list) &&
-            !isSilentModel
+            _.isArray(meta_data.metadata_list)
           ) {
             refContent = meta_data.metadata_list.reduce((meta, v) => {
               return meta + `${v.title} - ${v.url}\n`;
@@ -1164,8 +1032,6 @@ async function receiveStream(model: string, stream: any): Promise<any> {
         );
         data.choices[0].message.content += chunk;
       } else {
-        if(thinkingText)
-          data.choices[0].message.content = `<think>\n${thinkingText}</think>\n\n${data.choices[0].message.content}`;
         data.choices[0].message.content =
           data.choices[0].message.content.replace(
             /【\d+†(来源|源|source)】/g,
@@ -1193,23 +1059,18 @@ async function receiveStream(model: string, stream: any): Promise<any> {
  *
  * 将流格式转换为gpt兼容流格式
  *
- * @param model 模型
  * @param stream 消息流
  * @param endCallback 传输结束回调
  */
-function createTransStream(model: string, stream: any, endCallback?: Function) {
+function createTransStream(stream: any, endCallback?: Function) {
   // 消息创建时间
   const created = util.unixTimestamp();
   // 创建转换流
   const transStream = new PassThrough();
-  const isSilentModel = model.indexOf('silent') != -1;
-  const isThinkModel = model.indexOf('think') != -1 || model.indexOf('zero') != -1;
   let content = "";
-  let thinking = false;
   let toolCall = false;
   let codeGenerating = false;
   let textChunkLength = 0;
-  let thinkingText = "";
   let codeTemp = "";
   let lastExecutionOutput = "";
   let textOffset = 0;
@@ -1217,7 +1078,7 @@ function createTransStream(model: string, stream: any, endCallback?: Function) {
   transStream.write(
     `data: ${JSON.stringify({
       id: "",
-      model,
+      model: MODEL_NAME,
       object: "chat.completion.chunk",
       choices: [
         {
@@ -1245,7 +1106,6 @@ function createTransStream(model: string, stream: any, endCallback?: Function) {
           status: partStatus,
           type,
           text,
-          think,
           image,
           code,
           content,
@@ -1256,11 +1116,6 @@ function createTransStream(model: string, stream: any, endCallback?: Function) {
             innerStr += "\n";
           }
           if (type == "text") {
-            if(thinking) {
-              innerStr += "</think>\n\n"
-              textOffset += thinkingText.length + 8;
-              thinking = false;
-            }
             if (toolCall) {
               innerStr += "\n";
               textOffset++;
@@ -1268,41 +1123,17 @@ function createTransStream(model: string, stream: any, endCallback?: Function) {
             }
             if (partStatus == "finish") textChunkLength = text.length;
             return innerStr + text;
-          } else if (type == "think" && isThinkModel && !isSilentModel) {
-            if(!thinking) {
-              innerStr += "<think>\n";
-              textOffset += 7;
-              thinking = true;
-            }
-            if (toolCall) {
-              innerStr += "\n";
-              textOffset++;
-              toolCall = false;
-            }
-            if (partStatus == "finish") textChunkLength = think.length;
-            thinkingText += think.substring(thinkingText.length, think.length);
-            return innerStr + thinkingText;
-          } else if (type == "think" && !isSilentModel) {
-            if (toolCall) {
-              innerStr += "\n";
-              textOffset++;
-              toolCall = false;
-            }
-            if (partStatus == "finish") textChunkLength = thinkingText.length;
-            thinkingText += think;
-            return innerStr + thinkingText;
           } else if (
             type == "quote_result" &&
             status == "finish" &&
             meta_data &&
-            _.isArray(meta_data.metadata_list) &&
-            !isSilentModel
+            _.isArray(meta_data.metadata_list)
           ) {
             const searchText =
               meta_data.metadata_list.reduce(
-                (meta, v) => meta + `检索 ${v.title}(${v.url}) ...\n`,
+                (meta, v) => meta + `检索 ${v.title}(${v.url}) ...`,
                 ""
-              );
+              ) + "\n";
             textOffset += searchText.length;
             toolCall = true;
             return innerStr + searchText;
@@ -1492,23 +1323,41 @@ function tokenSplit(authorization: string) {
   return authorization.replace("Bearer ", "").split(",");
 }
 
+/**
+ * 备用生成cookie
+ *
+ * 暂时还不需要
+ *
+ * @param refreshToken
+ * @param token
+ */
+function generateCookie(refreshToken: string, token: string) {
+  const timestamp = util.unixTimestamp();
+  const gsTimestamp = timestamp - Math.round(Math.random() * 2592000);
+  return {
+    chatglm_refresh_token: refreshToken,
+    // chatglm_user_id: '',
+    _ga_PMD05MS2V9: `GS1.1.${gsTimestamp}.18.0.${gsTimestamp}.0.0.0`,
+    chatglm_token: token,
+    chatglm_token_expires: util.getDateString("yyyy-MM-dd HH:mm:ss"),
+    abtestid: "a",
+    // acw_tc: ''
+  };
+}
+
 /**
  * 获取Token存活状态
  */
 async function getTokenLiveStatus(refreshToken: string) {
-  const sign = await generateSign();
   const result = await axios.post(
-    "https://chatglm.cn/chatglm/user-api/user/refresh",
-    undefined,
+    "https://chatglm.cn/chatglm/backend-api/v1/user/refresh",
+    {},
     {
      headers: {
         Authorization: `Bearer ${refreshToken}`,
         Referer: "https://chatglm.cn/main/alltoolsdetail",
         "X-Device-Id": util.uuid(false),
         "X-Request-Id": util.uuid(false),
-        "X-Sign": sign.sign,
-        "X-Timestamp": sign.timestamp,
-        "X-Nonce": sign.nonce,
         ...FAKE_HEADERS,
       },
       timeout: 15000,
@@ -3,6 +3,7 @@ import _ from 'lodash';
 import Request from '@/lib/request/Request.ts';
 import Response from '@/lib/response/Response.ts';
 import chat from '@/api/controllers/chat.ts';
+import logger from '@/lib/logger.ts';
 
 export default {
 
@@ -20,15 +21,15 @@ export default {
         // 随机挑选一个refresh_token
         const token = _.sample(tokens);
         const { model, conversation_id: convId, messages, stream } = request.body;
+        const assistantId = /^[a-z0-9]{24,}$/.test(model) ? model : undefined
         if (stream) {
-            const stream = await chat.createCompletionStream(messages, token, model, convId);
+            const stream = await chat.createCompletionStream(messages, token, assistantId, convId);
             return new Response(stream, {
                 type: "text/event-stream"
             });
         }
         else
-            return await chat.createCompletion(messages, token, model, convId);
+            return await chat.createCompletion(messages, token, assistantId, convId);
     }
 
 }
@@ -18,11 +18,6 @@ export default {
             "object": "model",
             "owned_by": "glm-free-api"
         },
-        {
-            "id": "glm-4-plus",
-            "object": "model",
-            "owned_by": "glm-free-api"
-        },
         {
             "id": "glm-4v",
             "object": "model",