Professor Melissa Hamilton
About
Research interests
Interdisciplinary research on issues related to domestic and sexual violence, trauma responses in victims of assault, risk assessment practices, policing, sentencing, and corrections.
Teaching
Criminal Justice
Domestic Violence & the Law
Affiliations
State Bar of Texas
American Psychological Association
American Psychology-Law Association (Research Committee member)
Institute on Domestic Violence and Sexual Assault
Association of Threat Assessment Professionals
International Corrections and Prisons Association
(Past member) Risk Assessment Task Force, National Association of Criminal Defense Lawyers
Royal Statistical Society
Editorial Board, Advancing Corrections
News
In the media
Ten Things to Know About Sexual Assault on Campus
Publications
The legal definition of child pornography is, at best, unclear. In part because of this ambiguity and in part because of the nature of the crime itself, the prosecution and sentencing of perpetrators, the protection of and restitution for victims, and the means for preventing repeat offenses are deeply controversial. In an effort to clarify the questions and begin to formulate answers, in this volume, experts in law, sociology, and the social sciences examine child pornography law and its consequences. Focusing on the roles of language and crime definition, the contributors present a range of views about the increasingly visible role that child pornography plays in the national conversation on child safety, as well as the wisdom of the punishment of those who produce, distribute, and possess materials which may be considered child pornography.
Matthew Sepi, a 20-year-old combat veteran who had been deployed in Iraq, headed out to a local convenience store in Las Vegas in 2005, concealing an AK-47 under his clothing in case he needed to protect himself in a neighborhood known for violence and crime. At one point a man and a woman approached him in a dark alley, ordering Sepi to leave the area. Feeling he was being ambushed by enemy troops, Sepi instinctively reacted by "engag[ing] his targets" and shooting at them. Once the individuals appeared immobilized from the gunshots, Sepi followed training protocol in "breaking contact" with the enemies and retreating. Both individuals were shot, and one of them died of the gunshot wounds. Sepi was charged with murder and attempted murder.
Last year, Blomberg, Mestre, and Mann (2013) in Criminology & Public Policy called on criminologists to embolden themselves to offer the best empirical research to inform public policy discussions concerning criminal justice issues, even if their research cannot show causality. The main research article in this segment represents a wonderful example of such a contribution. Kaiser and Spohn’s (2014, this issue) research directly confronts an area of criminal justice in current turmoil because of doctrinal and moral policy disputes. The realm is the federal sentencing system. Created by Congress in the Sentencing Reform Act of 1984, the U.S. Sentencing Commission was tasked with the responsibility of establishing presumptive sentencing guidelines to direct sentencing judges in determining a reasonable sentence. A goal of the reform legislation was to foster consistency in sentencing practices and thereby reduce unwarranted disparities. Yet the U.S. Supreme Court untethered the presumptive sentencing guideline regime in the case of United States v. Booker in 2005 when it remedied a constitutional error it found plagued the guideline structure by rendering the guideline system advisory in nature. Federal district judges were given further leeway when the Supreme Court in Kimbrough v. United States (2007) ruled the judiciary could reject guideline recommendations based on a policy disagreement. Tension has existed ever since these rulings in terms of a power struggle for determining reasonable punishments, spawning discussions and debates among researchers, academics, practitioners, and policy makers about how to repair the discord and, perhaps more importantly, meliorate policies...
The Drug War ushered in harsh sentencing practices in the United States. The severity in penalties has been particularly salient in the federal criminal justice system. Increased statutory penalties and U.S. Sentencing Commission guidelines led to drug users and traffickers serving longer periods of incarceration. As a result, the federal correctional system is overburdened. A noticeable change in attitude is evident. Congress has offered leniency for certain first-time drug offenders in the form of a statutory safety valve. While a progressive step, the safety valve applies to relatively few individuals. Importantly, federal judges have some discretion to reject what they might consider to be overly lengthy sentencing mandates. This Article provides an empirical study of sentencing statistics for drug offences. The sample derives from the U.S. Sentencing Commission’s fiscal year 2019 dataset of over 20,000 cases sentenced for drug crimes. Results show that judges employed various mechanisms to reduce statutory- and guidelines-based penalties. Strategies by judges include avoiding mandatory minimums (using the safety valve and otherwise), giving greater point reductions than permitted, and rejecting Commission policies. Over 60% of sentences were below the guidelines’ minimum recommendations. The consequences are beneficial in alleviating strain on the federal prison population, but create inconsistency in sentencing practices. A qualitative component supplements the quantitative. Judges, when issuing their statement of reasons for the sentence, may include textual comments. These comments provide valuable contextual information in how judges articulate their concerns with sanctions for drug offenders. Overall results present important policy considerations.
Algorithmic risk assessment tools are informed by scientific research concerning which factors are predictive of recidivism and thus support the evidence‐based practice movement in criminal justice. Automated assessments of individualized risk (low, medium, high) permit officials to make more effective management decisions. Computer-generated algorithms appear to be objective and neutral. But are these algorithms actually fair? The focus herein is on gender equity. Studies confirm that women typically have far lower recidivism rates than men. This differential raises the question of how well algorithmic outcomes fare in terms of predictive parity by gender. This essay reports original research using a large dataset of offenders who were scored on the popular risk assessment tool COMPAS. Findings indicate that COMPAS performs reasonably well at discriminating between recidivists and non‐recidivists for men and women. Nonetheless, COMPAS algorithmic outcomes systemically overclassify women in higher risk groupings. Multiple measures of algorithmic equity and predictive accuracy are provided to support the conclusion that this algorithm is sexist.
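A minimal sketch of the kind of group-wise comparison the essay describes, run on synthetic data rather than the COMPAS dataset used in the study; the column names, base rates, and decile scores below are hypothetical. It computes discrimination (AUC) and the share of each gender placed in the highest-risk deciles, the pattern relevant to the overclassification concern.

```python
# Illustrative sketch only: synthetic data standing in for COMPAS-style
# decile scores; all names and numbers are assumptions, not study data.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
gender = rng.choice(["male", "female"], size=n, p=[0.8, 0.2])
# Assume women recidivate at a lower base rate than men, as the essay notes.
p_recid = np.where(gender == "male", 0.45, 0.25)
recidivated = rng.binomial(1, p_recid)
# Decile risk score loosely related to recidivism, 1 (low) to 10 (high).
decile = np.clip(np.round(rng.normal(3 + 4 * recidivated, 2)), 1, 10).astype(int)

df = pd.DataFrame({"gender": gender, "recidivated": recidivated, "decile": decile})

for g, sub in df.groupby("gender"):
    auc = roc_auc_score(sub["recidivated"], sub["decile"])  # discrimination
    high_risk = (sub["decile"] >= 8).mean()                 # share labelled high risk
    recid_rate = sub["recidivated"].mean()                  # observed base rate
    print(f"{g}: AUC={auc:.2f}, high-risk rate={high_risk:.2%}, "
          f"recidivism rate={recid_rate:.2%}")
```

Comparing the high-risk rate against each group's observed recidivism rate is one simple way to see whether a tool places one gender into higher risk bins more often than its outcomes warrant.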
This chapter addresses a type of excessive legislation called overcriminalisation by reviewing the main political philosophies that define which acts are eligible for criminalisation in the first place. It outlines how overcriminalisation may occur in contemporary society and reviews the negative consequences thereof. The chapter analyses the new coercive control legislation by enquiring into whether it is an appropriate focus of criminal law or whether it represents a case study in overcriminalisation, and discusses the concept of excessive legislation. Echoing harm theorists, legislation “ought in all cases whatever scrupulously to respect privacy”. Coercive control is the term coined to describe an abuser’s ongoing and systematic strategy to attain and maintain power and control in an intimate partner relationship; the distinguishing feature of coercive control legislation is that it eschews the incident-based model. Various advocates for a coercive control offence justify it in legally moralistic terms.
The admission of hearsay qualifying as an excited utterance, present sense impression, or statement about mental and bodily conditions is an exception to the general rule of inadmissibility for hearsay statements. Evidence scholars explain these exceptions as being presumably reliable statements as they are generally contemporaneous with an event at issue such that faults with memory and time to lie are remedied. These three exceptions have been particularly depended upon in cases of interpersonal violence in which victims are considered to honestly complain during the occurrence of the assault and in its immediate aftermath. Nonetheless, much recent research in interdisciplinary circles highlights that the impact of trauma has varied consequences upon subjects' abilities to accurately and fully articulate what just transpired. Concurrent neurophysiological reactions to traumatic stress can mediate, alter, or entirely thwart one's capacity to conceptualize internally, and to clearly verbalize externally, the violent attack. Thus, unlike the hearsay exceptions' presumption of accuracy, a surfeit of scientific knowledge now shows that violence victims may, or may not, issue holistic and reliable reports in the near term. On the other hand, empirical studies reject the notion that it takes more than a blink of an eye to fabricate a story. Evidence law is often intransigent in its reliance upon folk psychological assumptions about human behavior. Yet with legal scholars and practitioners increasingly embracing the benefits that scientific knowledge can bring to the law, the time may be ripe to reconsider these three hearsay exceptions. In light of recent studies drawing from neurology, physiology, and psychology principles and research designs in trauma studies, the goal of evidence law in terms of preventing unreliable testimony can only benefit thereby.
Various sectors of criminal justice systems around the world are increasing their use of artificial intelligence (AI). Yet, as University of Surrey Professor Melissa Hamilton and Professor Pamela Ugwudike of the University of Southampton explain, AI has played a key part in offender assessment decisions in the UK, US, Australia, Canada and other countries for more than 20 years, with little or no transparency or independent inspection of the algorithms involved.
The Supreme Court may soon hear a case on data-driven criminal sentencing. Research suggests that algorithms are not as good as we think they are at making these decisions.
The United States has earned its nickname as a mass incarceration nation. The federal criminal justice system has contributed to this status with its own increasing rate of incarceration. The federal system now ranks as the largest population of sentenced prisoners in the country; it is even larger than the national prisoner populations among all European countries, save one. This is a recent phenomenon. This Article ties the increase in the federal incarceration rate to policies adopted by the U.S. Sentencing Commission since its inception that presume imprisonment as the default sentence. Since the Sentencing Commission’s creation in 1984, the proportion of federal sentences requiring incarceration increased from under 50% to over 90%. This Article provides evidence that the prison-by-default position by the Sentencing Commission is contrary to congressional intent when the Legislature passed sentencing reform laws in the 1980s, has contributed to a federal prison system that is operating over capacity, and wastes resources. The increasing rate of imprisonment at the federal level conflicts with the downward trend in national crime rates and with the states’ sentencing experiences in which probation sentences continue to be preferred. Potential alternative explanations for the significant trend toward the affirmative use of imprisonment in federal sentences are outlined, yet the available statistical evidence generally rules them out. Finally, suggestions on changes to the sentencing guidelines and to judicial sentencing practices are offered.
A diverse band of politicians, justice officials, and academic commentators are lending their voices to the hot topic of correcting the United States’ status as the world’s leader in mass incarceration. There is limited focus, though, upon the special role that life sentences play in explaining the explosion in prison populations and the dramatic rise in costs that result from providing for the increased needs of aging lifers. This Article highlights various ways in which those serving life sentences occupy unique legal and political statuses. For instance, life sentences are akin to capital punishment in likely resulting in death within prison environs, yet enjoy few of the added procedural rights and intensity of review that capital defendants command. In contrast to term prisoners, lifers cannot expect to reenter civil society and thus represent an exclusionist ideological agenda. The Article reviews whether life penalties remain justified by fundamental theories of punishment in light of new evidence on retributive values, deterrence effects, and recidivism risk. It also situates life sentences within an international moral imperative that reserves life penalties, if permitted at all, for the most heinous offenders, and in any event, demands periodic review of all long-term prison sentences. This Article also provides a novel perspective by presenting an empirical study to further investigate the law and practice of life sentences. Utilizing federal datasets, descriptive statistics, and a multiple regression analysis offers important insights. The study makes an original contribution to the literature by exploring the salience of certain facts and circumstances (including demographic, offense-related, and case-processing variables) in accounting for life sentence outcomes in the federal system. While some of the attributes of life sentenced defendants are consistent with current expectations, others might be surprising. For example, as expected, sentencing guideline recommendations, the presence of mandatory minimums, and greater criminal history predicted life sentences. Results also supported the existence of a trial penalty. On the other hand, lifers in the federal system were not representative of the most violent offenders or worst recidivists. Life sentences were issued across a variety of violent and nonviolent crimes, and in recent years a substantial percentage presented with minimal criminal histories. Regional disparities in the use of life sentences were also indicated. In concluding, this Article reviews potential remedies to the overreliance upon life penalties in the American justice system.
Pretrial detention has become normative in contemporary criminal justice, rather than the exception to a rule of release for individuals not convicted of any crime. Even the opportunity for release with a bond amount often eludes the many individuals who are unable to afford to pay. Defendants detained pending trial suffer numerous negative consequences to their own legal cases, such as being more likely to feel pressured to plead guilty and to receive a prison sentence. The high numbers of those detained appear to disproportionately impact minorities and have contributed to mass incarceration. As a result of these issues, the country is in the midst of a third reform movement in terms of policies to increase the rate of pretrial release without financial surety and to incorporate algorithmic risk assessment tools to isolate the few individuals who pose a high likelihood of failure if released pending trial. This Article offers a case study of an important site engaged in pretrial reforms. The research deploys a dataset of defendants booked into jail in Cook County, Illinois (home to Chicago). The study provides an empirical exploration of how the outcome of pretrial detention may be associated with racial and gender disparities and whether any such disparities are ameliorated when considering a host of legal factors that are predictive of pretrial detention. A related research question is how the use of an algorithmic risk tool modifies the relationship between pretrial detention and a combination of demographic factors and judicial decisions about release. Policy implications of the results are informative to debates concerning pretrial reforms in terms of whether risk assessment tools offer the ability to reduce racial/ethnic and gender disparities and to decrease the detention rate. Potential contributions such as this study are timely considering the experiment with decarceration due to COVID-19 concerns, which has not been associated with an increased risk to public safety.
Risk assessment tools driven by algorithms offer promising advantages in predicting the recidivism risk of defendants. Jurisdictions are increasingly relying upon risk tool outcomes to help judges at sentencing with their decisions on whether to incarcerate or whether to use community‐based sanctions. Yet as sentencing has significant consequences for public safety and individual rights, care must be taken that the tools relied upon are appropriate for the task. Judges are encouraged to act as gatekeepers to evaluate whether the forensic risk assessment tool offered has a sufficient level of validity in that it is fit for the purposes of sentencing, provides an acceptable level of accuracy in its predictions, and achieves an adequate standard of reliability with regard to its outcomes.
In this Essay, Professor Hamilton considers the recent use by Dallas police officers of a robot armed with plastic explosives to kill a suspected gunman on a shooting rampage. In the wake of Dallas, many legal experts in the news maintained that the police action was constitutional. The commentators' consensus was that as long as the police had the right to use lethal force, then the means of that force is irrelevant. This Essay argues the contrary. Under the current state of the constitutional law on the police use of force on a suspected felon, excessive lethal force is a valid consideration. The type and magnitude of lethal force may, under certain circumstances, be unconstitutional despite the suspect posing a high degree of risk to others.
Adult female targets of domestic violence by male perpetrators have commonly been described as helpless and passive. This is consistent with the criminal justice system's perception that true “victims” have little culpability or agency in a violent assault. Otherwise, the “victims” are more likely to be defined as participants in the violent act, and thus unworthy of official protection. This study examines court opinions involving convictions of male offenders of domestic violence against their female partners and ex-partners. The purpose is to understand the development of judicial knowledge as to whether women in relationships with violent men are socially constructed as worthy and legitimate victims of violence. The 60+ appellate case opinions in the analysis are those where a California trial court accepted expert testimony on domestic violence in prosecuting the male offenders to explain the women's actions regarding their violent relationships. California was chosen because of the state's progressive and unique evidentiary statutes that permit a broad range of evidence in criminal prosecutions of domestic violence, including expert witnesses. In reviewing the judicial opinions that comprise the corpus, I found that an underlying assumption evident in the judicial discourses is that abused women would, should or could easily exercise agency in ending an abusive relationship and, once it was ended, refuse to reengage in their abusive relationships. Using critical discourse analysis, this study shows that, in constructing women's agency in resisting abusive relationships, judicial discourse tended to rely more heavily upon expert testimony and, in a few cases, on prosecutorial arguments, than on the testimony (i.e. voice) of the female victims themselves. In this process, the women's voices were silenced or marginalized as experts’ constructions of victimized women were preferred.
On July 7 Dallas officials used a robot to kill the suspected shooter of five police officers. The homicide of the suspect was precipitated by a series of events. After a prolonged gun battle and hours of negotiations with police, the suspect, then holed up in a confined area of a college building, continued to threaten to kill more and claimed to have planted bombs nearby. Investigating officers decided the time had come to incapacitate him. The police secured a one-pound chunk of C4 plastic explosive material to the extendable arm of a militarized robot. Then, they sent the robot into the confined area where the suspect was concealing himself. The police officer operating the robot remotely blew up the attached explosive, and as intended, the suspect was immediately killed. This is believed to be the first time in domestic policing that a robot carried out an act of lethal force.
Risk assessment algorithms lie at the heart of criminal justice reform to tackle mass incarceration. The newest application of risk tools centers on the pretrial stage as a means to reduce both reliance upon wealth-based bail systems and rates of pretrial detention. Yet the ability of risk assessment to achieve the reform movement’s goals will be challenged if the risk tools do not perform equitably for minorities. To date, little is known about the racial fairness of these algorithms as they are used in the field. This Article offers an original empirical study of a popular risk assessment tool to evaluate its race-based performance. The case study is novel in employing a two-sample design with large datasets from diverse jurisdictions, one with a supermajority white population and the other a supermajority Black population. Statistical analyses examine whether, in these jurisdictions, the algorithmic risk tool results in disparate impact, exhibits test bias, or displays differential validity in terms of unequal performance metrics for white versus Black defendants. Implications of the study results are informative to the broader knowledge base about risk assessment practices in the field. Results contribute to the debate about the topic of algorithmic fairness in an important setting where one’s liberty interests may be infringed despite not being adjudicated guilty of any crime.
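For illustration, a hedged sketch of two of the checks named above: test bias, examined through a score-by-race interaction in a logistic model predicting failure, and disparate impact, examined as a ratio of high-risk classification rates. The data, variable names, and cutoff are synthetic assumptions, not the Article's datasets or specification.

```python
# Illustrative differential-prediction (test bias) and disparate-impact checks
# on synthetic data; nothing here reproduces the study's actual samples.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 4000
race = rng.choice(["white", "black"], size=n)
score = rng.integers(1, 11, size=n)          # hypothetical risk score, 1-10
logit = -2.5 + 0.35 * score                  # same score-outcome link for both groups
failed = rng.binomial(1, 1 / (1 + np.exp(-logit)))

df = pd.DataFrame({"race": race, "score": score, "failed": failed})

# Test bias / differential prediction: a materially significant race main
# effect or race-by-score interaction would suggest the score predicts
# outcomes differently across groups.
model = smf.logit("failed ~ score * C(race)", data=df).fit(disp=False)
print(model.summary())

# Disparate impact: ratio of high-risk classification rates across groups.
cutoff = 7
rates = df.assign(high=(df["score"] >= cutoff)).groupby("race")["high"].mean()
print(rates, "\nimpact ratio:", round(rates.min() / rates.max(), 2))
```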
Domestic violence arrests have been historically focused on protecting women and children from abusive men. Arrest patterns continue to reflect this bias with more men arrested for domestic violence compared to women. Such potential gender variations in arrest patterns pave the way to the investigation of disparities by sex of the offender in domestic violence arrests. This study utilizes data from a quantitative dataset that includes responses by police officers who completed a specially mandated checklist after responding to a domestic dispute. The results showed that while females are arrested quite often in domestic disputes, there remains a significant difference in the arrest outcome whereby male suspects were more likely to be arrested than female suspects. Regression models further indicated differences based on sex and certain predictors of arrest, which supported sex-based rationales in arrests for domestic violence.
Algorithmic risk assessment is hailed as offering criminal justice officials a science-led system to triage offender populations to better manage low- versus high-risk individuals. Risk algorithms have reached the pretrial world as a best practices method to aid in reforms to reduce reliance upon money bail and to moderate pretrial detention’s material contribution to mass incarceration. Still, these promises are elusive if algorithmic tools are unable to achieve sufficient accurate rates in predicting criminal justice failure. This article presents an empirical study of the most popular pretrial risk tool used in the United States. Developers promote the Public Safety Assessment (PSA) as a national tool. Little information is known about the PSA’s developmental methodologies or performance statistics. The dearth of intelligence is alarming as the tool is being used in high-stakes decisions as to whether to detain individuals who have not yet been convicted of any crime. This study uncovers evidence of performance accuracy using a variety of validity metrics and, as a novel contribution, investigates the use of the tool in three diverse jurisdictions to evaluate how well the tool generalizes in real-world settings. Policy implications of the findings may be enlightening to officials, practitioners, and other stakeholders interested in pretrial justice as well as in the use of algorithmic risk across criminal justice decision points.
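A sketch, for illustration only, of a cross-jurisdiction generalization check in the spirit described above: comparing observed failure rates within each risk level across sites. The site names, the 1-to-6 scale, and the failure rates are invented; the PSA's actual scoring rules and the study's data are not reproduced here.

```python
# Hypothetical cross-site calibration comparison; all values are synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
rows = []
for site, base in [("site_A", 0.15), ("site_B", 0.25), ("site_C", 0.35)]:
    n = 2000
    level = rng.integers(1, 7, size=n)             # illustrative 1-6 risk scale
    p = np.clip(base + 0.06 * (level - 1), 0, 1)   # failure prob rises with level
    rows.append(pd.DataFrame({"site": site, "risk_level": level,
                              "failed": rng.binomial(1, p)}))
df = pd.concat(rows, ignore_index=True)

# If a tool generalizes well, level-specific failure rates should be broadly
# comparable across sites; large gaps point to the need for local validation.
print(df.pivot_table(index="risk_level", columns="site",
                     values="failed", aggfunc="mean").round(2))
```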
In late 2016, the U.S. Court of Appeals for the Sixth Circuit concluded in Does #1–5 v. Snyder that Michigan’s sex offender registry and residency restriction law constituted an ex post facto punishment in violation of the constitution. In its decision, the Sixth Circuit engaged with scientific evidence that refutes moralized judgments about sex offenders, specifically that they pose a unique and substantial risk of recidivism. This Essay is intended to highlight the importance of Snyder as an example of the appropriate use of scientific studies in constitutional law.
Congress continues to push for harsher sentences in child pornography cases, likely due to the polarizing view that those convicted of this offense are, or will become, child molesters. However, federal judges are more often of the opinion that the sentencing guidelines are too severe and do not provide flexibility depending on the specifics of the case. This Article first contextualizes the history of the sentencing expansions and discusses cases that raise different issues on both sides of the argument.
Lone-actor terrorist attacks are on the rise in the Western world in terms of numbers and severity. Public officials are eager for an evidence-based tool to assess the risk that individuals pose for terroristic involvement. Yet actuarial models of risk validated for ordinary criminal violence are unsuitable to terrorism. Lone-actor terrorists vary dramatically in their socio-psychological profiles and the base rate of terrorism is too low for actuarial modeling to achieve statistical significance. This Article proposes a new conceptual model for the terroristic threat assessment of individuals. Unlike risk assessment that is founded upon numerical probabilities, this threat assessment considers possibilistic thinking and considers the often idiosyncratic ideologies and strategies of lone-actor terrorists. The conceptual threat assessment model connects three overlapping foundations: (a) structured professional judgment concerning an individual’s goals, capabilities, and susceptibility to extremist thought, plus the imminence of a potential terroristic attack; (b) a multidisciplinary intelligence team engaging collective imaginaries of an otherwise unknown future of terrorism events; and (c) coordination between counterintelligence officials and academic communities to share data and conduct more research on lone-actor terrorists utilizing a systematic case study approach and engaging theoretical methodologies to inform about potential new ideological motivations and terroristic strategies which may be emerging due to cultural, environmental, and political drivers.
The Supreme Court will soon decide if North Carolina’s ban on the use of social networking websites by registered sex offenders is constitutional. The case is Packingham v. North Carolina and oral arguments were heard in February 2017. The principal legal issue in the case is whether the ban violates the First Amendment’s right to freedom of speech.
A new arena inviting collaboration between the law and sciences has emerged in criminal justice. The nation’s economic struggles and its record-breaking rate of incarceration have encouraged policymakers to embrace a new penology which seeks to simultaneously curb prison populations, reduce recidivism, and improve public safety. The new penology draws upon the behavioral sciences for techniques to identify and classify individuals based on their potential future risk and for current best evidence to inform decisions on how to manage offender populations accordingly. Empirically driven practices have been utilized in many criminal justice contexts for years, yet have historically remained “a largely untapped resource” in sentencing decisions. One reason is that sentencing law in America has for some time been largely driven by retributive theories. The new penology clearly incorporates utilitarian goals and welcomes an interdisciplinary approach to meet them.
Criminal justice stakeholders are strongly concerned with disparities in penalty outcomes. Disparities are problematic when they represent unfounded differences in sentences, an abuse of discretion, and/or potential discrimination based on sociodemographic characteristics. The Article presents an original empirical study that explores disparities in sentences at two levels: the individual case level and the regional level. More specifically, the study investigates upward departures in the United States’ federal sentencing system, which constitutes a guidelines-based structure. Upward departures carry unique consequences to individuals and their effects on the system as they lead to lengthier sentences, symbolically represent a dispute with the guidelines advice, and contribute to mass incarceration. Upward departures are discretionary to district courts and thus may lead to disparities in sentencing in which otherwise seemingly like offenders receive dissimilar sentences, in part because of the tendency of their assigned judges to depart upward (or not). The study utilizes a multilevel mixed model to test the effects of a host of explanatory factors on the issuance of upward departures at the case level and whether those same factors are significant at the group level-i.e., district courts-to determine the extent of variation across districts. The explanatory variables tested include legal factors (e.g., final offense level, criminal history, offense type), extralegal characteristics (e.g., gender, race/ethnicity, citizenship), and case-processing variables (e.g., trial penalty, custody status). The results indicate that various legal and nonlegal factors are relevant in individual cases (representing individual differences) and signify that significant variations across district courts exist (confirming regional disparities). Implications of the significant findings for the justice system are explored.
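A simplified sketch of the two-level structure described above, using synthetic data. As a simplification it fits a linear probability mixed model with random intercepts by district (statsmodels MixedLM) as a stand-in for the Article's multilevel specification for a binary outcome; all variable names and effect sizes are hypothetical.

```python
# Minimal multilevel sketch: case-level predictors of an upward departure plus
# a district-level grouping. Synthetic data; not the Article's model or data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_districts, n_cases = 30, 6000
district = rng.integers(0, n_districts, size=n_cases)
district_effect = rng.normal(0, 0.05, size=n_districts)   # regional variation
offense_level = rng.integers(10, 40, size=n_cases)
criminal_history = rng.integers(1, 7, size=n_cases)
went_to_trial = rng.binomial(1, 0.05, size=n_cases)

p = np.clip(0.01 + 0.002 * offense_level + 0.01 * criminal_history
            + 0.05 * went_to_trial + district_effect[district], 0, 1)
upward_departure = rng.binomial(1, p)

df = pd.DataFrame({"district": district, "offense_level": offense_level,
                   "criminal_history": criminal_history, "trial": went_to_trial,
                   "upward": upward_departure})

# Random intercepts by district capture between-district variation in the
# tendency to depart upward; fixed effects capture case-level factors.
model = smf.mixedlm("upward ~ offense_level + criminal_history + trial",
                    data=df, groups=df["district"]).fit()
print(model.summary())
```

The size of the estimated group (district) variance relative to the residual variance gives a rough sense of how much of the variation in departures sits at the regional rather than the case level.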
... Part V offers a review of case law involving the role of the two actuarial assessment tools in SVP status cases, including an assessment of how courts have responded to Daubert- and Frye-based challenges to the instruments. ... Hence, with the Supreme Court's approval of expert predictions of future violence in death penalty cases, and with the majority's reference to expert assessments of the risk of violence in civil commitments, it seems reasonable to extrapolate Barefoot's general conclusion to future dangerousness assessments of sex offenders. ... To develop the experience table, the developer used the sexual recidivism rates observed in seven follow-up studies of released sex offenders in the United States, Canada, and England. ... The STATIC-99 instrument includes 10 static factors:
- Age at assessment: 0 = 25 years or older; 1 = between 18 and 25 years
- Having lived with an age-appropriate intimate partner for at least 2 years: 0 = yes; 1 = no
- Any convictions for an Index non-sexual violent offense: 0 = no; 1 = yes
- Any convictions for non-sexual violence before the Index offense (the most recent sexual offense): 0 = no; 1 = yes
- Number of prior sex offenses: 0 = none; 1 = 1-2 charges or 1 conviction; 2 = 3-5 charges or 2-3 convictions; 3 = >6 charges or >4 convictions
- Number of prior sentencing dates: 0 = 3 or less; 1 = 4 or more
- Any convictions for a non-contact sexual offense: 0 = no; 1 = yes
- Any nonfamilial victims: 0 = no; 1 = yes
- Any stranger victims: 0 = no; 1 = yes
- Any male victims: 0 = no; 1 = yes
For STATIC-99, total scores range from 0 to 12, arranged within seven risk categories organized into four ordinal risk groups (from 0 = low risk to 6+ = high risk). ... It is highly questionable whether there ever was, and even more questionable whether there is today, a general acceptance in the mental health field about the validity of using actuarial risk assessments in SVP legal determinations. ... Judicial Perspectives on Future Dangerousness Evidence: Since the Supreme Court approved mental health testimony about future dangerousness and found civil commitment of sexual predators and registration laws to be constitutional, the introduction of actuarial risk assessments through expert testimony has become common practice in SVP determinations. ... In the case, the prosecutor had argued that even low scores from actuarial tools are sufficient to constitute the legal standard of "likely" to reoffend: "Even taking the expert's tests, the RRASOR, about 11% failure rate after ten years." ... And, finally, some argue that the legislatures created sexual predator laws with the future dangerousness concept and, thus, we (mental health experts, judges, and lawyers) need to use the best available evidence to make those decisions, even if the legal standards remain vague, and even though current models of actuarial risk assessment suffer large gaps in validity and reliability.
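A simplified sketch of how items like those excerpted above sum to a total score and map to an ordinal risk band. The item keys, example values, and cut points are illustrative only and do not reproduce the instrument's official coding rules.

```python
# Illustrative STATIC-99-style scoring, based only on the item list excerpted
# above; not the instrument's official coding manual.
def static99_total(items: dict) -> int:
    """Sum the ten item scores (mostly 0/1, with prior sex offenses 0-3)."""
    return sum(items.values())

def risk_bin(total: int) -> str:
    # The excerpt describes 0-12 totals grouped into ordinal risk bands, from
    # low (0) to high (6+); these cut points are an assumption for illustration.
    if total <= 1:
        return "low"
    if total <= 3:
        return "moderate-low"
    if total <= 5:
        return "moderate-high"
    return "high"

example = {
    "age_18_to_25": 0, "never_lived_with_partner_2yrs": 1,
    "index_nonsexual_violence": 0, "prior_nonsexual_violence": 1,
    "prior_sex_offenses": 2, "prior_sentencing_dates_4plus": 1,
    "noncontact_sex_convictions": 0, "nonfamilial_victims": 1,
    "stranger_victims": 0, "male_victims": 0,
}
total = static99_total(example)
print(total, risk_bin(total))   # 6 -> "high" under these illustrative cut points
```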
A ‘black box’ AI system has been influencing criminal justice decisions for more than two decades – it is time to open it up. Justice systems around the world are using artificial intelligence (AI) to assess people with criminal convictions. These AI technologies rely on machine learning algorithms and their key purpose is to predict the risk of reoffending. But critics say that a lack of access to the data, as well as other crucial information required for independent evaluation, raises serious questions of accountability and transparency.