asterixdb

Changed create-polygon to accept a list of doubles.

a fix for asterix issue 633.

change spatial type constructors to return null if a null argument is passed in

partially fix issue 602

partial fix for issue 676

add import that got screwed up in the merge

Merge branch 'master' into zheilbron/hyracks_msr_demo

Merge branch 'master' into zheilbron/asterix_msr_demo

fix conflict in comment

Merge branch 'master' into yingyi/fullstack_fix

remove extra method

merge master into fix-mem branch

Merge branch 'master' into pouria/fix-memory

Conflicts:
    hyracks/hyracks-api/src/main/java/edu/uci/ics/hyracks/api/context/IHyracksCommonContext.java
    hyracks/hyracks-client/src/main/java/edu/uci/ics/hyracks/client/dataset/DatasetClientContext.java
    hyracks/hyracks-storage-am-lsm-invertedindex/src/main/java/edu/uci/ics/hyracks/storage/am/lsm/invertedindex/ondisk/OnDiskInvertedIndex.java
    hyracks/hyracks-test-support/src/main/java/edu/uci/ics/hyracks/test/support/TestTaskContext.java

Fix method signatures

re-enabled test

modified test case to use adaptor alias instead of class name

Merge branch 'master' into raman/master_feeds_adaptor_test

fix to support non-default pregelix cc http ports

fix parsing of actual 64-bit values

corrected the external adaptor test case to use the right library

compatibility for bash versions < 4.0

add (optional) CC_HTTPPORT and JOB_HISTORY_SIZE to conf

fix for Linux setting

added test for installation and use of an external adaptor

fix client dyn-opt setting

1. make startcc/nc scripts flexible for different physical memory sizes; 2. add a dynamic optimization option in the Client

support heterogeneous clusters

use Counters as partial value to simplify HadoopCountersAggregator

add new example for Counters usage

add support for Hadoop Counters via job.setCounterAggregatorClass

PregelixJob.setCounterAggregatorClass sets up a (user-specified) global aggregator and an iterationComplete hook to save Counter values. The user-specified Counter-based aggregator (which must extend HadoopCountersAggregator) is saved to HDFS in each iteration and should be restart/snapshot-aware.

To set up counters, call job.setCounterAggregatorClass. After job completion, the Counters may be retrieved from HDFS using BspUtils.getCounters(job).

Note that there is currently only one spot for iterationComplete hooks, and this behavior occupies it.
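
A minimal usage sketch of the Counter-aggregation API described in this commit message. Only the names it mentions (PregelixJob, setCounterAggregatorClass, HadoopCountersAggregator, BspUtils.getCounters) come from the source; the imports, the job name, MyCountersAggregator, and the omitted Pregelix setup steps are hypothetical placeholders, not the project's confirmed API surface.

    // Sketch only: MyCountersAggregator is a hypothetical user class that extends
    // HadoopCountersAggregator; Pregelix imports are omitted, and exact signatures
    // are assumed from the commit message above rather than verified.
    import org.apache.hadoop.mapreduce.Counters;

    public class CountersUsageSketch {
        public static void runWithCounters() throws Exception {
            PregelixJob job = new PregelixJob("job-with-counters");

            // Register the Counter-based global aggregator; per the commit message
            // it is saved to HDFS each iteration and should be restart/snapshot-aware.
            job.setCounterAggregatorClass(MyCountersAggregator.class);

            // ... configure vertex/input/output classes and submit the job through
            //     the Pregelix driver as usual (omitted here) ...

            // After job completion, retrieve the aggregated Counters from HDFS.
            Counters counters = BspUtils.getCounters(job);
            System.out.println(counters);
        }
    }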