pipeline
Here are 1,918 public repositories matching this topic...
Expected Behavior
When defining parameters for a Task in my Pipeline, I would like to use values that I defined in a ConfigMap, instead of hardcoding them in the Pipeline.
A cool solution would look like the way ConfigMaps are used when defining environment variables in a Pod, where I am able to do this:
```yaml
env:
  - name: SOME_ENV_VARIABLE
    valueFrom:
      configMapKeyRef: {name: <configmap-name>, key: <key>}
```
Azure Event Hubs is like a managed Kafka service:
Allow existing Apache Kafka clients and applications to talk to Event Hubs without any code changes—you get a managed Kafka experience without having to manage your own clusters.
This suggests that we could wrap the existing kafka sink. I'd prefer that we wrap it since it is techni
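For context, this is the pattern the quote describes: an existing Kafka client only needs new connection settings to talk to Event Hubs. A minimal sketch, assuming the confluent-kafka Python client; the namespace, topic, and connection string below are placeholders, not values from this issue:

```python
# Minimal sketch: an ordinary Kafka producer pointed at the Event Hubs
# Kafka-compatible endpoint. Namespace, topic, and connection string are
# placeholders.
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "my-namespace.servicebus.windows.net:9093",
    "security.protocol": "SASL_SSL",
    "sasl.mechanism": "PLAIN",
    "sasl.username": "$ConnectionString",
    "sasl.password": "<event-hubs-connection-string>",
})

producer.produce("my-topic", value=b"hello from a plain Kafka client")
producer.flush()
```

The same property applies to whichever Kafka client the existing sink already uses, which is what makes wrapping that sink attractive.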
Summary
I'm trying to use a Chart Repository backed by an external Nexus, but I had a problem in the pipeline when using the command:
jx step helm release
When this command uploads the chart file, my $CHART_REPOSITORY is changed to include /api/charts:
+ jx step helm release --verbose
DEBUG: Using helmBinary helm with feature flag: template-mode
DEBUG: Initialising Helm 'i
Hi all!
Just want to share with the team some details I've run into while executing notebooks from the command line using a YAML file.
First, let me show my case. I've been parametrizing different notebooks to isolate data wrangling processes. To do it, I needed to use lists of dictionaries to specify keys describing my data, such as area or paths where some files were stored. As all
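For context, a minimal sketch of what passing a list of dictionaries as a single notebook parameter looks like through papermill's Python API; the notebook paths and the "datasets" parameter name are made up for illustration:

```python
import papermill as pm

# Illustrative only: notebook paths and the "datasets" parameter name are
# placeholders; the point is passing a list of dictionaries as one parameter.
pm.execute_notebook(
    "wrangle.ipynb",
    "wrangle_output.ipynb",
    parameters={
        "datasets": [
            {"area": "north", "path": "data/north.csv"},
            {"area": "south", "path": "data/south.csv"},
        ]
    },
)
```

The report above is about expressing this same structure in the YAML file that gets passed on the command line.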
Summary
Add a field/label "LIVE For:" to show how much time the Revision was LIVE. Currently, when a user clicks "History and Rollback", it shows all past Revisions, when each was triggered and completed, and also the amount of time for which that Revision was LIVE. But it does not mention what these timings/durations are for (as shown in the attached screenshot, where it shows dur
Description & context
Users can specify names for their nodes to identify them more easily. When a name is not explicitly specified, Kedro auto-generates a default name. You can see this in the name property on Node.
The current auto-generated name for a node looks something like this: `func_name(inputs) -> outputs` (see the implementation of the `__str__` method on the `Node` class).
This is
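For illustration, a small sketch of where that default name shows up; the function and dataset names are placeholders, and this assumes `node` is importable from `kedro.pipeline`:

```python
from kedro.pipeline import node

def clean_data(raw):
    return raw

# No explicit name is given, so Kedro falls back to the auto-generated one,
# which is derived from the function name, inputs, and outputs.
n = node(clean_data, inputs="raw_data", outputs="clean_data")
print(n.name)  # something like: clean_data([raw_data]) -> [clean_data]
```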
As a user,
It would be nice to have the "Observed Value" field standardized to show percentages of "successful" validations, instead of a mix of 0% / 100%. The current mix causes confusion, as there are different levels of validation output with different verbiage (making someone not used to the expectations confused). I've given an example below in a screenshot of what I mean:
To Reproduce
Steps to reproduce the behavior:
- Go to 'https://cloud.lastbackend.com/templates#public'
- See error
Expected behavior
I was expecting a lot more, e.g. Wordpress, MongoDB, Bootstrap, PostgreSQL, Etherpad, Apache, CouchDB, SearX, wall
New feature
Ability to specify the Compute Engine disk type (pd-standard or local-SSD) found in the new Cloud Life Sciences API (https://cloud.google.com/life-sciences/docs/reference/rpc/google.cloud.lifesciences.v2beta#disk).
Usage scenario
Jobs that require high input/output operations per second and lower latency (https://cloud.google.com/compute/docs/disks/local-ssd).
S
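For reference, a rough sketch of where such a disk type would sit in a v2beta pipeline request body, loosely following the Disk message in the linked reference; treat the exact field names as assumptions, and note that the disk name, size, machine type, image, and mount path are placeholders rather than any tool's actual interface:

```python
# Rough sketch of a google.cloud.lifesciences.v2beta pipeline body fragment.
# Names, sizes, and paths are placeholders; only the disks[].type field is
# the point here.
pipeline = {
    "actions": [
        {
            "imageUri": "ubuntu:20.04",
            "commands": ["bash", "-c", "process-data /mnt/scratch"],
            "mounts": [{"disk": "scratch", "path": "/mnt/scratch"}],
        }
    ],
    "resources": {
        "virtualMachine": {
            "machineType": "n1-standard-4",
            "disks": [{"name": "scratch", "sizeGb": 375, "type": "local-ssd"}],
        }
    },
}
```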
stelligent / mu
While testing another PR, I found that the `mu pipeline logs` command displays information from the pipelines, but also shows this error:
```
$ mu pipeline logs
[... normal, expected output ...]
func1 ▶ ERROR ResourceNotFoundException: The specified log group does not exist.
    status code: 400, request id: f7260741-7f69-4772-b4cc-7c6a9c22d264
```
This error does not occur with the `-f
Host docs in datakit
Currently we host the API docs on our local server, and everything is very manual. It would be nice if we could integrate this into the rest of the CI flow (e.g. re-publish automatically on every PR, in a location available to all).
Note: usually GitHub Pages is great for that, but it doesn't work well for private repos (i.e. the pages are public).
Type assertions allow us to override TypeScript's inference, and we should avoid using them to solve type conflicts manually, which means we should be careful with the use of assertions.
Refer to:
- type-assertion - TypeScript Deep Dive
- [basic-types - TypeScript Handbook](https://www.typescriptl
The documentation generator at ./docs/generator/ could support the config struct tags introduced in trivago/gollum#143 to:
- Detect undocumented configuration parameters and include them in the documentation
- Detect and warn about nonexistent / mistyped configuration parameters found in the doc comments
Not a very urgent need, but perhaps useful when revising do
Add a new method:
proc.SetPathAuditHashed(portName string, prefix string, suffix string) ... which will create a SHA1(?) hash of the port name plus all the commands, parameters, and upstream commands and parameters used to create the specific file.
Usage example:
```go
proc.SetPathAuditHashed("csvfile", "dat/", ".csv")
```
Internally, it should use a function that can als
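As a rough illustration of the idea (not scipipe's API, and in Python rather than Go): the hash input combines the port name with the full chain of commands and parameters that produced the file:

```python
import hashlib

def audit_hashed_path(port_name, prefix, suffix, commands):
    """Illustrative only: build a file path whose name is a SHA1 over the
    port name plus every command/parameter string (own and upstream) that
    produced the file."""
    digest = hashlib.sha1()
    digest.update(port_name.encode())
    for cmd in commands:
        digest.update(cmd.encode())
    return f"{prefix}{digest.hexdigest()}{suffix}"

# e.g. audit_hashed_path("csvfile", "dat/", ".csv",
#                        ["sort raw.txt", "awk '{print $1}' sorted.txt"])
```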
From alan-turing-institute/MLJBase.jl#68:
This doesn't work:
```julia
@mlj_model mutable struct Bar
    a::Int = -1::(_ > -2)
end
```
But this does:
```julia
@mlj_model mutable struct Bar
    a::Int = (-)(1)::(_ > -2)
end
```
This needs to be documented in MLJ/docs/src/adding_models_for_general_use and MLJ/docs/src/quick_start_guide_to_adding_models
It's really difficult to read a command like `docker run --entrypoint=mesos-master --net=host -d --name=leader --volume=/home/jobStoreParentDir:/jobStoreParentDir quay.io/ucsc_cgl/toil:3.6.0 --registry=in_memory --ip=127.0.0.1 --port=5050 --allocation_interval=500ms` on one line.
Would it be possible to split it with \ so that Sphinx shows it all at once? E.g.
```sh
docker run \
    --entrypoint=mesos-master \
    --net=host \
    -d \
    --name=leader \
    --volume=/home/jobStoreParentDir:/jobStoreParentDir \
    quay.io/ucsc_cgl/toil:3.6.0 \
    --registry=in_memory \
    --ip=127.0.0.1 \
    --port=5050 \
    --allocation_interval=500ms
```
Description
The bower_components folder is not compiled when the angular-i18n dependency is included.
Expected behavior
Brunch should process the bower_components folder and perform the related tasks.
Actual behavior
When I add the angular-i18n package to my bower.json, Brunch ignores the bower_components folder and the vendor files (js and css) are n