profilePlans {BARD} R Documentation
These functions create sets of redistricting plans to show how redistricting criteria affect plan outcomes.
profilePlans(seedplans, score.fun, ngenplans = 0,
  gen.fun = "createRandomPlan", gen.args = list(),
  refine.fun = "refineAnnealPlan", refine.args = list(),
  addscore.fun = NULL,
  weight.fun = function(score1, score2, weight) {
    sum(score1 + weight * score2)
  },
  weight = seq(0, 1, length.out = 10),
  numevals = 10, tracelevel = 1, usecluster = TRUE)

samplePlans(seedplans, score.fun = calcContiguityScore, ngenplans = 24,
  gen.fun = "createRandomPlan", gen.args = list(),
  refine.fun = "refineAnnealPlan", refine.args = list(),
  tracelevel = 1, usecluster = TRUE)
seedplans: initial plans to be used as seeds
score.fun: base score function
ngenplans: number of additional plans to generate
gen.fun: function for generating additional plans
gen.args: a list of additional arguments to gen.fun
refine.fun: function for plan refinement
refine.args: a list of additional arguments to refine.fun
addscore.fun: additional score component
weight.fun: function to combine the two score components into a weighted score
weight: vector of weights
numevals: number of evaluations per plan at each point
tracelevel: desired level of printed tracing of the optimization; 0 = no printing, higher levels give more detail (9 is the maximum)
usecluster: use the BARD cluster for computations, if available
samplePlans generates a set of plans, adds them to the given seed plans, and refines all of them according to the score function, producing a pseudo-sample of plans optimizing a particular score.

profilePlans generates a set of plans pseudo-sampled using a two-part score function, where the weight of the second component is varied across the supplied weight vector.
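The default weight.fun shown in the usage above simply forms a weighted sum of the two score components. A minimal standalone sketch of that combination, using made-up numeric vectors in place of real BARD plan scores (the contig and pop values are illustrative assumptions, not package output):

```r
# Default combination used by profilePlans: sum(score1 + weight * score2)
weight.fun <- function(score1, score2, weight) {
  sum(score1 + weight * score2)
}

# Hypothetical per-district score components (not real BARD output)
contig <- c(0.2, 0.1, 0.4)   # base score component, e.g. contiguity
pop    <- c(1.5, 0.8, 1.1)   # additional component, e.g. population deviation

# Evaluate the combined score at each point of the weight profile
weights <- seq(0, 1, length.out = 3)
sapply(weights, function(w) weight.fun(contig, pop, w))
# 0.7 2.4 4.1
```

As the weight moves from 0 to 1, the additional score component contributes progressively more to the combined score, which is how profilePlans traces out the trade-off between the two criteria.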
Returns a list of BARD plans.
These functions can be very compute-intensive. If a compute cluster is configured, they will automatically distribute the computing load across the cluster. See startBardCluster.
Micah Altman <Micah_Altman@harvard.edu>, http://www.hmdc.harvard.edu/micah_altman/
Altman, M. 1997. "Is Automation the Answer? The Computational Complexity of Automated Redistricting." Rutgers Computer and Technology Law Journal 23(1): 81-142. http://www.hmdc.harvard.edu/micah_altman/pubpapers.shtml

Altman, M. 1998. "Modeling the Effect of Mandatory District Compactness on Partisan Gerrymanders." Political Geography 17: 989-1012.

Cirincione, C., T.A. Darling, and T.G. O'Rourke. 2000. "Assessing South Carolina's 1990's Congressional Districting." Political Geography 19: 189-211.

Altman, M. and M.P. McDonald. 2004. "A Computation Intensive Method for Detecting Gerrymanders." Paper presented at the annual meeting of the Midwest Political Science Association, Palmer House Hilton, Chicago, Illinois, April 15, 2004. http://www.allacademic.com/meta/p83108_index.html

Altman, M., K. Mac Donald, and M.P. McDonald. 2005. "From Crayons to Computers: The Evolution of Computer Use in Redistricting." Social Science Computer Review 23(3): 334-46.
Plan refinement algorithms: refineGreedyPlan, refineAnnealPlan, refineGenoudPlan, refineNelderPlan

Cluster computing: startBardCluster
suffolk.map <- importBardShape(
  file.path(system.file("shapefiles", package = "BARD"), "suffolk_tracts"))
numberdists <- 5
kplan <- createKmeansPlan(suffolk.map, numberdists)
rplan <- createRandomPlan(suffolk.map, numberdists)
rplan2 <- createRandomPopPlan(suffolk.map, numberdists)

myScore <- function(plan, ...) {
  return(calcContiguityScore(plan, ...))
}

samples <- samplePlans(kplan, score.fun = myScore, ngenplans = 20,
  gen.fun = "createRandomPlan",
  refine.fun = "refineNelderPlan",
  refine.args = list(maxit = 200, dynamicscoring = TRUE))

profplans <- profilePlans(list(kplan, rplan),
  score.fun = calcContiguityScore,
  addscore.fun = calcPopScore,
  numevals = 2, weight = c(0, 0.5, 1),
  refine.fun = "refineNelderPlan",
  refine.args = list(maxit = 200, dynamicscoring = TRUE))

summary(samples)
plot(summary(samples))
reportPlans(samples)
plot(summary(profplans))