pyspark
Here are 1,138 public repositories matching this topic...
Hello everyone,
Recently I tried to set up petastorm on my company's Hadoop cluster. However, because the cluster uses Kerberos for authentication, petastorm failed. I found that petastorm relies on pyarrow, which does in fact support Kerberos authentication. As a workaround I patched `petastorm/petastorm/hdfs/namenode.py` around line 250 and replaced the driver selection with:

```python
driver = 'libhdfs'
return pyarrow.hdfs.connect(..., driver=driver)
```

(the original arguments to `connect` are unchanged; only the driver is forced to `libhdfs`.)
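A hedged sketch of that workaround: the helper below (the name `hdfs_connect_kwargs` and the example namenode URL are illustrative, not petastorm's actual code) builds the keyword arguments for pyarrow's legacy `pyarrow.hdfs.connect()`, forcing the JNI-based `libhdfs` driver, which goes through the regular Hadoop client libraries and therefore picks up the Kerberos ticket cache:

```python
from urllib.parse import urlparse

def hdfs_connect_kwargs(namenode_url, driver='libhdfs'):
    # Build keyword arguments for pyarrow.hdfs.connect(), forcing the
    # JNI-based 'libhdfs' driver (the C-only 'libhdfs3' driver does not
    # use the Hadoop client's Kerberos ticket cache).
    url = urlparse(namenode_url)
    return {
        'host': url.hostname or 'default',
        'port': url.port or 8020,  # 8020 is a common namenode RPC port
        'driver': driver,
    }

# Against a live Kerberized cluster one would then call:
#   import pyarrow
#   fs = pyarrow.hdfs.connect(**hdfs_connect_kwargs('hdfs://nn.example.com:8020'))
```

Note that `pyarrow.hdfs.connect` is the legacy filesystem API and is deprecated in newer pyarrow releases, so this sketch assumes a pyarrow version contemporary with petastorm's namenode code.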
If these are not class methods, the methods are invoked for every test and a Spark session is created for each of those tests. Making them `@classmethod`s (called once from `setUpClass`) creates a single session for the whole test class:

```python
import logging
import unittest
from pyspark.sql import SparkSession

class PySparkTest(unittest.TestCase):
    @classmethod
    def suppress_py4j_logging(cls):
        logger = logging.getLogger('py4j')
        logger.setLevel(logging.WARN)

    @classmethod
    def create_testing_pyspark_session(cls):
        # A small local Spark master is enough for unit tests.
        return (SparkSession.builder
                .master('local[2]')
                .appName('pyspark-unit-tests')
                .getOrCreate())
```
These files belong to the Gimel Discovery Service, which is still work in progress at PayPal and not yet open sourced. In addition, the logic in these files is outdated, so it does not make sense to keep them in the repo.
https://github.com/paypal/gimel/search?l=Shell
Remove --> gimel-dataapi/gimel-core/src/main/scripts/tools/bin/hbase/hbase_ddl_creator.sh
I have a simple regression task (using a LightGBMRegressor) where I want to penalize negative predictions more than positive ones. Is there a way to achieve this with the built-in LightGBM regression objectives (see https://lightgbm.readthedocs.io/en/latest/Parameters.html)? If not, is it possible to define and pass a custom regression objective, as there are many examples of doing so for the plain (non-Spark) LightGBM model?
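The idea can be sketched with the native LightGBM custom-objective convention, where the objective is a function returning the gradient and hessian of the loss per sample. The function name `asymmetric_l2` and the `neg_penalty` knob are illustrative, and this sketch assumes the plain Python `lightgbm` API; whether the Spark `LightGBMRegressor` wrapper exposes a hook for such a Python callable is exactly the open question above.

```python
import numpy as np

def asymmetric_l2(preds, labels, neg_penalty=3.0):
    # Squared-error loss that is multiplied by `neg_penalty` whenever
    # the model predicts a negative value, so negative predictions are
    # penalized more heavily than positive ones.
    # Returns (gradient, hessian) per sample, the shape LightGBM's
    # custom-objective interface expects.
    residual = preds - labels
    weight = np.where(preds < 0, neg_penalty, 1.0)
    grad = 2.0 * weight * residual
    hess = 2.0 * weight
    return grad, hess
```

Note the loss is not differentiable at a prediction of exactly zero; custom objectives of this piecewise form are nevertheless common in gradient boosting, since the boundary case occurs with measure zero in practice.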