Apache HugeGraph-AI 1.7.0 (Release)

@Pengzna released this on 16 Nov 15:29 · commit 101f10f

What's Changed

  • fix(llm): limit the length of log & improve the format by @diya-he in #121
  • fix: pylint in ml by @MrJs133 in #125
  • fix(lint): critical bug with pylint usage by @MrJs133 in #131
  • feat(llm): added the process of text2gql in graphrag V1.0 by @vichayturen in #105
  • fix(llm): multi vid k-neighbor query only return the data of first vid by @yc319 in #132
  • refactor(llm): use pydantic-settings for config management by @ChenZiHong-Gavin in #122
  • fix(llm): replace getenv usage to settings by @ChenZiHong-Gavin in #133
  • fix(llm): support choose template in api by @HJ-Young in #135
  • fix(ml): Correct header writing errors by @MrJs133 in #140
  • fix(llm): update prompt to fit prefix cache by @HJ-Young in #137
  • chore: enable pip cache by @imbajin in #142
  • feat(llm): timely execute vid embedding & enhance some HTTP logic by @MrJs133 in #141
  • fix(llm): extract_graph_data use wrong method by @MrJs133 in #145
  • feat(llm): use retry from tenacity by @ChenZiHong-Gavin in #143
  • refactor(llm): remove enable_gql logic in api & rag block by @HJ-Young in #148
  • doc(client): update README for python-client/SDK by @HJ-Young in #150
  • feat(llm): modify the summary info and enhance the request logic by @MrJs133 in #147
  • feat(llm): automatic backup graph data timely by @MrJs133 in #151
  • refactor(llm): add a button to backup data & count together by @MrJs133 in #153
  • fix(llm): use empty str for llm config by @ChenZiHong-Gavin in #155
  • refactor(llm): extract topk_per_keyword & topk_return_results to .env by @MrJs133 in #154
  • feat(llm): modify clear buttons by @MrJs133 in #156
  • feat(llm): support intent recognition V1 by @HJ-Young in #159
  • refactor(llm): change vid embedding x:yy to yy & use multi-thread by @MrJs133 in #158
  • refactor(llm): support mathjax in rag query block V1 by @MrJs133 in #157
  • feat(llm): use poetry to manage the dependencies by @returnToInnocence in #149
  • feat(llm): add post method for paths-api by @MrJs133 in #162
  • refactor(llm): return schema.groovy first when backup graph data by @MrJs133 in #161
  • fix(llm): update gremlin generate prompt to apply fuzzy match by @HJ-Young in #163
  • fix(llm): enable fastapi auto reload function by @Aryankb in #164
  • fix(llm): merge all logs into one file by @Aryankb in #171
  • refactor: use uv for the CI action by @imbajin in #175
  • refactor(llm): use EN prompt for keywords extraction by @imbajin in #174
  • chore: add collaborators in asf config by @imbajin in #182
  • feat(llm): support litellm LLM provider by @coderzc in #178
  • feat(llm): support switch graph in api & add some query configs by @afterimagex in #184
  • refactor(llm): improve graph extraction default prompt by @Kryst4lDem0ni4s in #187
  • refactor(llm): replace vid by full vertices info by @imbajin in #189
  • feat(llm): support asynchronous streaming generation in rag block by using async_generator and asyncio.wait by @vichayturen in #190
  • feat(llm): Generalize the regex extraction func by @HJ-Young in #194
  • feat(llm): create quick_start.md by @MrJs133 in #196
  • feat(llm): support Docker & K8s deployment way by @afterimagex in #195
  • chore(llm): multi-stage building in Dockerfile by @imbajin in #199
  • chore: enable discussion & change merge way by @imbajin in #201
  • fix(llm): fix tiny bugs & optimize reranker layout by @HJ-Young in #202
  • fix(llm): enable tasks concurrency configs in Gradio by @Kryst4lDem0ni4s in #188
  • feat(llm): support graph checking before updating vid embedding by @MrJs133 in #205
  • fix(llm): Align regex extraction of json to json format of prompt by @Thespica in #211
  • fix(client): fix documentation sample code error by @Thespica in #219
  • feat(llm): disable text2gql by default by @MrJs133 in #216
  • chore(llm): use 4.1-mini and 0.01 temperature by default by @MrJs133 in #214
  • refactor(llm): enhance the multi configs for LLM by @cgwer in #212
  • feat(llm): Textbox to Code #217 by @mikumifa in #223
  • refactor: replace the IP + Port with URL by @mingfang in #209
  • chore(dependency): update gradio's version by @weijinglin in #235
  • fix(llm): failed to remove vectors when updating vid embedding by @MrJs133 in #243
  • chore(llm): use asyncio to get embeddings by @Mushrimpy in #215
  • refactor(llm): change QPS -> RPM for timer decorator by @Ethereal-O in #241
  • refactor(llm): support batch embedding by @weijinglin in #238
  • chore(llm): using nuitka to provide a binary/perf way for the service by @weijinglin in #242
  • chore(hg-llm): use uv instead of poetry by @cgwer in #226
  • fix(llm): skip empty chunk in LLM streaming mode by @MrJs133 in #245
  • fix(llm): ollama batch embedding bug by @MrJs133 in #250
  • refactor(llm): basic compatible in text2gremlin generation by @MrJs133 in #261
  • fix(config): enhance config path handling and add project root validation by @weijinglin in #262
  • fix: add pyproject.toml anchor file to Dockerfile by @weijinglin in #266
  • feat(vermeer): add vermeer python client for graph computing by @MrJs133 in #263
  • refactor: use uv in client & ml modules & adapter the CI by @cgwer in #257
  • refactor(vermeer): use uv to manage pkgs & update README by @imbajin in #272
  • docs: synchronization with official documentation by @weijinglin in #273
  • docs: fix grammar errors by @ChenZiHong-Gavin in #275
  • docs: improve README clarity and deployment instructions by @weijinglin in #276
  • fix(llm): limit the deps version to handle critical init problems by @imbajin in #279
  • chore: add docker-compose deployment and improve container networking instructions by @weijinglin in #280
  • feat(llm): support semi-automated prompt generation by @LRriver in #281
  • feat(llm): support semi-automated generated graph schema by @Creeprime in #274
  • docs: update docker compose command by @hezhangjian in #283
  • chore: unify all modules with uv by @imbajin in #287
  • fix(log): reduce third-party library log output (#244) by @ZequanLIU in #284
  • chore: add GitHub Actions for auto upstream sync and update SEALData subsample logic by @weijinglin in #289
  • chore(llm): add a basic LLM/AI coding instruction file by @imbajin in #290
  • feat(llm): add rules for AI coding guideline - V1.0 by @fantasy-lotus in #293
  • refactor(llm): replace QianFan by OpenAI-compatible format by @day0n in #285
  • docs: update README with improved setup instructions by @weijinglin in #294
  • perf(llm): optimize vector index with asyncio embedding by @weijinglin in #264
  • fix(llm): refactor embedding parallelization to preserve order by @imbajin in #295
  • feat(llm): support switching prompt EN/CN by @day0n in #269
  • feat(llm): support storing vector data for a graph instance by model type/name by @Creeprime in #265
  • feat(llm): text2gremlin api by @MrJs133 in #258
  • fix(llm): add missing 'properties' in gremlin prompt formatting by @Seanium in #298
  • feat(llm): add AGENTS.md as new document standard by @fantasy-lotus in #299
  • feat(llm): update keyword extraction method (BREAKING CHANGE) by @Gfreely in #282
  • refactor: add fixed workflow execution engine (Flow, Node, and Scheduler architecture) by @weijinglin in #302
  • feat(llm): support vector db layer V1.0 by @fantasy-lotus in #304
  • chore: fixed cgraph version by @weijinglin in #305
  • fix(llm): Ollama embedding API usage and config param by @imbajin in #306
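Several entries above (#215, #238, #264, #295) move embedding calls onto asyncio, with #295 specifically preserving result order during parallelization. A minimal sketch of the underlying pattern, not the project's actual code (`embed` and `embed_all` are hypothetical names, and the fake latency only simulates out-of-order completion):

```python
import asyncio

async def embed(text: str) -> list[float]:
    # Stand-in for a real embedding API call; longer inputs here
    # deliberately finish at different times to simulate network jitter.
    await asyncio.sleep(0.01 * (len(text) % 3))
    return [float(len(text))]

async def embed_all(texts: list[str]) -> list[list[float]]:
    # asyncio.gather runs the coroutines concurrently but returns
    # results in argument order, regardless of completion order,
    # so vector i always corresponds to texts[i].
    return await asyncio.gather(*(embed(t) for t in texts))

texts = ["a", "bb", "ccc"]
vectors = asyncio.run(embed_all(texts))
print(vectors)  # [[1.0], [2.0], [3.0]] -- order matches the input
```

Using `gather` instead of `asyncio.as_completed` is what keeps the vector index aligned with its source texts without extra bookkeeping.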

New Contributors

Full Changelog: 1.5.0...1.7.0