kafka
Here are 7,735 public repositories matching this topic...
Currently, ksqlDB causes a full topic scan whenever performing a pull query over a stream. This is inefficient when looking up specific sets of data, but necessary due to how pull queries are implemented over streams.
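The stream/table distinction behind that cost can be sketched in miniature (hypothetical data and names, not ksqlDB's actual implementation): a stream is an append-only log, so a pull query with a predicate has to visit every record, while a materialized table supports a direct keyed read.

```python
# A stream is an append-only log: a pull query with a predicate must
# examine every record -- the equivalent of a full topic scan.
stream = [{"order_id": i, "total": i * 10} for i in range(1000)]

def pull_query_over_stream(records, order_id):
    # O(n): every record is visited, whatever the predicate selects.
    return [r for r in records if r["order_id"] == order_id]

# A table materializes the latest value per key, so the same lookup
# is a single keyed read.
table = {r["order_id"]: r for r in stream}

def pull_query_over_table(tbl, order_id):
    # O(1): direct lookup by key.
    return tbl.get(order_id)
```

Both calls return the same row for a given key; the difference is purely in how much of the data must be touched to find it.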
For an implementation of #126 (PostgreSQL driver with SKIP LOCKED), I create an SQL table for each consumer group containing the offsets ready to be consumed. The name for these tables is built by concatenating a prefix, the name of the topic, and the name of the consumer group. In some of the test cases in the test suite, UUIDs are used for both the topic and the consumer group. Each UUID has
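A sketch of the naming scheme described above (the `offsets_` prefix is an assumption for illustration): a canonical UUID string is 36 characters, and PostgreSQL silently truncates identifiers longer than 63 bytes (the default `NAMEDATALEN` of 64 minus the terminator), so a name built from two UUIDs plus a prefix overshoots that limit.

```python
import uuid

PG_IDENTIFIER_LIMIT = 63  # bytes; PostgreSQL truncates longer identifiers

def offsets_table_name(prefix, topic, group):
    # Prefix + topic + consumer group, concatenated as described above.
    return f"{prefix}{topic}_{group}"

topic = str(uuid.uuid4())   # canonical form: 36 characters
group = str(uuid.uuid4())   # canonical form: 36 characters
name = offsets_table_name("offsets_", topic, group)

# 8 (prefix) + 36 + 1 (separator) + 36 = 81 characters, over the limit.
print(len(name), len(name) > PG_IDENTIFIER_LIMIT)
```

Two such names that differ only past byte 63 would therefore collide after truncation, which is presumably why UUID-named topics and groups expose the problem in the test suite.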
Who is this for and what problem do they have today?
This is for users who make use of more than one consumer group. A problem that arises is that even with many nodes and few groups, the group coordinators may be chosen unevenly; e.g., with 8 groups and 32 nodes it is possible (and in practice common enough) that one node is the coordinator for 2 or 3 groups. Ideally, no node would coo
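"Common enough" can be made concrete with a birthday-problem estimate, assuming (as a simplification of Kafka's hash-based coordinator selection) that each group's coordinator is an independent uniform choice among the nodes:

```python
import random

def coordinator_collisions(groups=8, nodes=32, trials=100_000, seed=1):
    """Estimate how often at least one node ends up coordinating
    two or more groups, under independent uniform assignment."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        chosen = [rng.randrange(nodes) for _ in range(groups)]
        if len(set(chosen)) < groups:  # some node was picked twice
            hits += 1
    return hits / trials

# Analytically: P(all distinct) = prod((32 - i) / 32 for i in 1..7),
# roughly 0.39 -- so about 6 in 10 assignments overload some node.
print(coordinator_collisions())
```

With the 8-groups/32-nodes numbers from the report, uneven assignment is the majority outcome, not a corner case, which supports the request for deliberate balancing.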
I have noticed when ingesting backlog (older-timestamped data) that the "Messages per minute" line graph and the "sources" data do not line up.
The "Messages per minute" figure appears to be correct for the ingest rate, but the sources breakdown below it only shows messages of each type whose timestamps fall within the time window. This means that in the last hour, if you've ingested logs from 2 days ago, the data is
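The mismatch described is consistent with the two widgets bucketing by different clocks (an assumption for illustration, not a confirmed reading of the tool's internals): the rate graph counting by arrival time, the sources breakdown filtering by each message's own timestamp.

```python
from datetime import datetime, timedelta

now = datetime(2022, 7, 18, 12, 0)
window_start = now - timedelta(hours=1)

# Backlog ingested just now, but carrying timestamps from two days ago.
messages = [
    {"source": "syslog", "event_time": now - timedelta(days=2),    "ingest_time": now},
    {"source": "syslog", "event_time": now - timedelta(days=2),    "ingest_time": now},
    {"source": "app",    "event_time": now - timedelta(minutes=5), "ingest_time": now},
]

# "Messages per minute" style: bucket by when each message arrived.
by_ingest = sum(1 for m in messages if m["ingest_time"] >= window_start)

# "Sources" style: filter by the message's own (event) timestamp.
by_event = sum(1 for m in messages if m["event_time"] >= window_start)

print(by_ingest, by_event)  # 3 vs 1: the two views disagree on backlog
```

Counting by arrival time sees all three messages in the last hour, while filtering by event timestamp keeps only the fresh one, exactly the kind of divergence the report describes.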