Commit 794bd040 authored by Christos Christodoulopoulos's avatar Christos Christodoulopoulos

Added JUnit tests

Removed unnecessary dependencies
Switched to illinois-nlp-pipeline-0.1.2
Minor fixes
parent e94d6880
Tags v5.1
Version 5.1
Added JUnit tests
Removed unnecessary dependencies
Switched to illinois-nlp-pipeline-0.1.2
Minor fixes
Version 5.0
TODO (standalone + cleanup + ready to add Prep)
Standalone SRL using illinois-nlp-pipeline
Version 4.1.1
Switched to edison-0.7.1 and LBJava-1.0
* Problem sentences
1. I gave the dog food. I gave the dog much food.
The second sentence is not analyzed at all. If the two sentences
are given to the SRL independently, the arguments are correctly
identified.
2. I gave the dog food. I gave the dog a lot of food.
The second "gave" is identified as an AM-MOD. When the second
sentence is given alone, this doesn't happen.
3. "I gave the dog food." vs. "I gave the dog a lot of food."
The first sentence is analyzed incorrectly. Can this be fixed
using proper subcategorization, or maybe with the correct sense of
the predicate?
* Nom SRL predicates
We need a better way to pick NOM SRL predicates. Try the following
sentences:
1. His thought was that the building is falling.
/thought/ is missed because the POS tagger does not identify it
as a noun.
<?xml version="1.0" encoding="iso-8859-1"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
lang="en" xml:lang="en">
<head>
<title>The Illinois SRL Manual</title>
<meta http-equiv="Content-Type" content="text/html;charset=iso-8859-1"/>
<meta name="generator" content="Org-mode"/>
<meta name="generated" content=""/>
<meta name="author" content="Vivek Srikumar"/>
<meta name="description" content=""/>
<meta name="keywords" content=""/>
<style type="text/css">
<!--/*--><![CDATA[/*><!--*/
html { font-family: Times, serif; font-size: 12pt; }
.title { text-align: center; }
.todo { color: red; }
.done { color: green; }
.tag { background-color: #add8e6; font-weight:normal }
.target { }
.timestamp { color: #bebebe; }
.timestamp-kwd { color: #5f9ea0; }
.right {margin-left:auto; margin-right:0px; text-align:right;}
.left {margin-left:0px; margin-right:auto; text-align:left;}
.center {margin-left:auto; margin-right:auto; text-align:center;}
p.verse { margin-left: 3% }
pre {
border: 1pt solid #AEBDCC;
background-color: #F3F5F7;
padding: 5pt;
font-family: courier, monospace;
font-size: 90%;
overflow:auto;
}
table { border-collapse: collapse; }
td, th { vertical-align: top; }
th.right { text-align:center; }
th.left { text-align:center; }
th.center { text-align:center; }
td.right { text-align:right; }
td.left { text-align:left; }
td.center { text-align:center; }
dt { font-weight: bold; }
div.figure { padding: 0.5em; }
div.figure p { text-align: center; }
textarea { overflow-x: auto; }
.linenr { font-size:smaller }
.code-highlighted {background-color:#ffff00;}
.org-info-js_info-navigation { border-style:none; }
#org-info-js_console-label { font-size:10px; font-weight:bold;
white-space:nowrap; }
.org-info-js_search-highlight {background-color:#ffff00; color:#000000;
font-weight:bold; }
/*]]>*/-->
</style>
<link rel="stylesheet" type="text/css" href="style.css" />
<script type="text/javascript">
<!--/*--><![CDATA[/*><!--*/
function CodeHighlightOn(elem, id)
{
var target = document.getElementById(id);
if(null != target) {
elem.cacheClassElem = elem.className;
elem.cacheClassTarget = target.className;
target.className = "code-highlighted";
elem.className = "code-highlighted";
}
}
function CodeHighlightOff(elem, id)
{
var target = document.getElementById(id);
if(elem.cacheClassElem)
elem.className = elem.cacheClassElem;
if(elem.cacheClassTarget)
target.className = elem.cacheClassTarget;
}
/*]]>*///-->
</script>
</head>
<body>
<div id="content">
<h1 class="title">The Illinois SRL Manual</h1>
<div id="table-of-contents">
<h2>Table of Contents</h2>
<div id="text-table-of-contents">
<ul>
<li><a href="#sec-1">1 Introduction </a></li>
<li><a href="#sec-2">2 Installation and usage </a>
<ul>
<li><a href="#sec-2_1">2.1 Getting started </a></li>
<li><a href="#sec-2_2">2.2 Configuration </a></li>
<li><a href="#sec-2_3">2.3 Modes of use </a>
<ul>
<li><a href="#sec-2_3_1">2.3.1 As a Curator plugin </a></li>
<li><a href="#sec-2_3_2">2.3.2 As a batch annotator </a></li>
<li><a href="#sec-2_3_3">2.3.3 Interactive mode </a></li>
</ul></li>
</ul>
</li>
<li><a href="#sec-3">3 Papers that used this software </a></li>
<li><a href="#sec-4">4 References </a></li>
</ul>
</div>
</div>
<div id="outline-container-1" class="outline-2">
<h2 id="sec-1"><span class="section-number-2">1</span> Introduction </h2>
<div class="outline-text-2" id="text-1">
<p>The Illinois SRL implements the single-parse Semantic Role Labeler
described in Punyakanok et al. (2008). Using a similar approach, it
also implements a nominal SRL system for the deverbal nouns in
NomBank (see Meyers (2007) for a detailed description of this
class).
</p>
<p>
This re-implementation is entirely in Java and achieves equivalent
performance on the Penn Treebank test set as described in the paper.
Using parse trees from the Charniak parser, the original work
achieves an average F1 of 76.29%. In comparison, this
re-implementation gets an F1 of 76.47% with beam search (which is
comparable to its performance with ILP inference). The nominal SRL
gets an F1 of 66.97% with beam search.
</p>
<p>
<b>Citing this work</b>: Coming soon.
</p>
</div>
</div>
<div id="outline-container-2" class="outline-2">
<h2 id="sec-2"><span class="section-number-2">2</span> Installation and usage </h2>
<div class="outline-text-2" id="text-2">
</div>
<div id="outline-container-2_1" class="outline-3">
<h3 id="sec-2_1"><span class="section-number-3">2.1</span> Getting started </h3>
<div class="outline-text-3" id="text-2_1">
<p>After downloading the archive containing the SRL system, unpack it
and run <code>srl.sh -v -i</code>. This starts the verb SRL system in
interactive mode, where you can enter sentences on the command line
and get their verb semantic role labels. For nominal semantic role
labeling, replace <code>-v</code> with <code>-n</code>. The first sentence
will take a long time because the system must load its models into
memory; subsequent sentences will be faster. Note that this system
requires nearly 10 GB of RAM for verb SRL and about 5 GB for
nominals.
</p>
<p>
If this works, you are all set. You can now use the semantic role
labeler in one of three modes: as a Curator plugin, as a batch
annotator, or in interactive mode.
</p>
</div>
</div>
<div id="outline-container-2_2" class="outline-3">
<h3 id="sec-2_2"><span class="section-number-3">2.2</span> Configuration </h3>
<div class="outline-text-3" id="text-2_2">
<p>Most of the configuration of the SRL system can be provided via a
config file, specified with the command line option
<code>-c &lt;config-file&gt;</code>. If this option is not specified, the
system looks for the file <code>srl-config.properties</code> in the
current directory.
</p>
<p>
Here is a summary of the configuration options:
</p>
<ol>
<li>
<i>CuratorHost</i>: Specifies the host of the curator instance which
provides the various inputs to the SRL system.
</li>
<li>
<i>CuratorPort</i>: Specifies the port on which the curator is
listening on <i>CuratorHost</i>.
</li>
<li>
<i>DefaultParser</i>: This can either be <code>Charniak</code> or
<code>Stanford</code>. This selects the constituent parser that provides
the features for the SRL system. It is assumed that the parser
corresponding to the choice here is provided by the
Curator. (Note: The SRL system has been trained using the
Charniak parser.)
</li>
<li>
<i>WordNetConfig</i>: Specifies the XML file that provides the
configuration for the Java WordNet Library (JWNL). An example
configuration file is provided as <code>jwnl_properties.xml</code>. The
path to the WordNet dictionary should be set in this file:
<pre class="src src-xml">&lt;<span style="color: #0000ff;">param</span> <span style="color: #a0522d;">name</span>=<span style="color: #8b2252;">"</span><span style="color: #8b2252;">dictionary_path</span><span style="color: #8b2252;">"</span> <span style="color: #a0522d;">value</span>=<span style="color: #8b2252;">"</span><span style="color: #8b2252;">/path/to/wordnet/dict/here</span><span style="color: #8b2252;">"</span>/&gt;
</pre>
</li>
<li>
<i>LoadWordNetConfigFromClassPath</i>: Specifies whether the WordNet
config file specified in <i>WordNetConfig</i> should be loaded from
the classpath. This property can take either <code>true</code> or <code>false</code>
values. If <code>true</code>, the system will look for the WordNet
configuration file in the classpath. If <code>false</code> or if the
property is not present, it loads the file from the filesystem.
</li>
<li>
<i>Inference</i>: This can either be <code>BeamSearch</code> or <code>ILP</code> and selects
the inference algorithm that is used to make the final
prediction. If the choice is <code>BeamSearch</code>, an in-built beam
search engine is used for inference. If the choice is <code>ILP</code>,
then the Gurobi ILP solver will be used. (Note: To use ILP
inference, the Gurobi engine needs to be configured.)
</li>
<li>
<i>BeamSize</i>: Specifies the beam size if beam search inference is
chosen. Otherwise, this option is ignored.
</li>
<li>
<i>TrimLeadingPrepositions</i>: Specifies whether the leading
prepositions of arguments should be trimmed. If this is set to
true, a sentence like "John bought a car from Mary on Thursday
for 2000 dollars." would be analyzed as "bought(A0: John, A1: a
car, A2: Mary, A3: 2000 dollars, AM-TMP: Thursday)". If this is
set to false (or if the option is not present), the leading
prepositions are included, giving "bought(A0: John, A1: a car,
A2: from Mary, A3: for 2000 dollars, AM-TMP: on Thursday)". This
option applies to both verbs and nouns.
</li>
</ol>
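<p>
As an illustration, a complete <code>srl-config.properties</code> might look
like the following sketch. The host name, port, and beam size shown here are
placeholder values for illustration, not defaults shipped with the
distribution:
</p>
<pre class="src">CuratorHost = curator.example.com
CuratorPort = 9010
DefaultParser = Charniak
WordNetConfig = jwnl_properties.xml
LoadWordNetConfigFromClassPath = false
Inference = BeamSearch
BeamSize = 50
TrimLeadingPrepositions = true
</pre>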
</div>
</div>
<div id="outline-container-2_3" class="outline-3">
<h3 id="sec-2_3"><span class="section-number-3">2.3</span> Modes of use </h3>
<div class="outline-text-3" id="text-2_3">
<p>For all three modes, either the <code>-v</code> or the <code>-n</code>
argument is required to indicate verb or nominal SRL, respectively.
</p>
</div>
<div id="outline-container-2_3_1" class="outline-4">
<h4 id="sec-2_3_1"><span class="section-number-4">2.3.1</span> As a Curator plugin </h4>
<div class="outline-text-4" id="text-2_3_1">
<p>To start the SRL system as a curator plugin, run the following command:
</p>
<pre class="src src-sh">./srl.sh [-v |-n ] -s &lt;port-number&gt; [-t &lt;number-of-threads&gt;]
</pre>
<p>
The number of threads is optional and defaults to one.
</p>
<p>
After the server starts, the Curator instance can be configured to
use it to serve SRL outputs. The following XML snippet should be
added to the Curator annotator descriptor file (with appropriate
type, host and port entries):
</p>
<pre class="src src-xml">&lt;<span style="color: #0000ff;">annotator</span>&gt;
&lt;<span style="color: #0000ff;">type</span>&gt;parser&lt;/<span style="color: #0000ff;">type</span>&gt;
&lt;<span style="color: #0000ff;">field</span>&gt;srl&lt;/<span style="color: #0000ff;">field</span>&gt;
&lt;<span style="color: #0000ff;">host</span>&gt;srl-host:srlport&lt;/<span style="color: #0000ff;">host</span>&gt;
&lt;<span style="color: #0000ff;">requirement</span>&gt;sentences&lt;/<span style="color: #0000ff;">requirement</span>&gt;
&lt;<span style="color: #0000ff;">requirement</span>&gt;tokens&lt;/<span style="color: #0000ff;">requirement</span>&gt;
&lt;<span style="color: #0000ff;">requirement</span>&gt;pos&lt;/<span style="color: #0000ff;">requirement</span>&gt;
&lt;<span style="color: #0000ff;">requirement</span>&gt;ner&lt;/<span style="color: #0000ff;">requirement</span>&gt;
&lt;<span style="color: #0000ff;">requirement</span>&gt;chunk&lt;/<span style="color: #0000ff;">requirement</span>&gt;
&lt;<span style="color: #0000ff;">requirement</span>&gt;charniak&lt;/<span style="color: #0000ff;">requirement</span>&gt;
&lt;/<span style="color: #0000ff;">annotator</span>&gt;
</pre>
</div>
</div>
<div id="outline-container-2_3_2" class="outline-4">
<h4 id="sec-2_3_2"><span class="section-number-4">2.3.2</span> As a batch annotator </h4>
<div class="outline-text-4" id="text-2_3_2">
<p>The SRL system can annotate several sentences as a batch by
running it on an input file containing one sentence per line.
Running the SRL in this form produces CoNLL-style column output
with the SRL annotation.
</p>
<p>
The following command runs the SRL in batch mode:
</p>
<pre class="src src-sh">./srl.sh [-v | -n ] -b &lt;input-file&gt; -o &lt;output-file&gt; [-w]
</pre>
<p>
Each line in the input file is treated as a separate sentence. The
option <code>-w</code> indicates that the sentences in the input file are
already whitespace-tokenized. Otherwise, the Curator is asked to
provide the tokenization.
</p>
</div>
</div>
<div id="outline-container-2_3_3" class="outline-4">
<h4 id="sec-2_3_3"><span class="section-number-4">2.3.3</span> Interactive mode </h4>
<div class="outline-text-4" id="text-2_3_3">
<p>The SRL system can be used in interactive mode by running it
with the <code>-i</code> option.
</p>
</div>
</div>
</div>
</div>
<div id="outline-container-3" class="outline-2">
<h2 id="sec-3"><span class="section-number-2">3</span> Papers that used this software </h2>
<div class="outline-text-2" id="text-3">
<p>The following papers have used an earlier version of this software:
</p>
<ul>
<li>
G. Kundu and D. Roth, <i>Adapting Text Instead of the Model: An
Open Domain Approach</i>. In Proceedings of the Conference on
Computational Natural Language Learning (CoNLL), 2011.
</li>
<li>
V. Srikumar and D. Roth, <i>A Joint Model for Extended Semantic
Role Labeling</i>. In Proceedings of the Conference on Empirical
Methods in Natural Language Processing (EMNLP), 2011.
</li>
</ul>
<p>
If you use this package, please let me know and I will add the
reference to this list.
</p>
</div>
</div>
<div id="outline-container-4" class="outline-2">
<h2 id="sec-4"><span class="section-number-2">4</span> References </h2>
<div class="outline-text-2" id="text-4">
<ol>
<li>
V. Punyakanok, D. Roth and W. Yih, <i>The Importance of Syntactic
Parsing and Inference in Semantic Role Labeling</i>. Computational
Linguistics, 2008.
</li>
<li>
A. Meyers. <i>Those Other NomBank Dictionaries</i>. Technical report,
New York University, 2007.
</li>
</ol>
</div>
</div>
<div id="postamble">
<p class="author"> Author: Vivek Srikumar
</p>
</div>
</div>
</body>
</html>
#+TITLE: The Illinois SRL Manual
#+AUTHOR: Vivek Srikumar
#+EMAIL: vsrikum2@uiuc.edu
#+DATE:
#+LANGUAGE: en
#+OPTIONS: H:3 num:t toc:t \n:nil @:t ::t |:t ^:t -:t f:t *:t <:t
#+OPTIONS: TeX:t LaTeX:t skip:nil d:nil todo:t pri:nil tags:not-in-toc
#+INFOJS_OPT: view:nil toc:nil ltoc:t mouse:underline buttons:0 path:http://orgmode.org/org-info.js
#+EXPORT_SELECT_TAGS: export
#+EXPORT_EXCLUDE_TAGS: noexport
#+LINK_UP:
#+LINK_HOME:
#+XSLT:
#+STYLE: <link rel="stylesheet" type="text/css" href="style.css" />
* Introduction
The Illinois SRL implements the single-parse Semantic Role Labeler
described in Punyakanok et al. (2008). Using a similar approach, it
also implements a nominal SRL system for the deverbal nouns in
NomBank (see Meyers (2007) for a detailed description of this
class).
This re-implementation is entirely in Java and achieves equivalent
performance on the Penn Treebank test set as described in the paper.
Using parse trees from the Charniak parser, the original work
achieves an average F1 of 76.29%. In comparison, this
re-implementation gets an F1 of 76.47% with beam search (which is
comparable to its performance with ILP inference). The nominal SRL
gets an F1 of 66.97% with beam search.
*Citing this work*: Coming soon.
* Installation and usage
** Getting started
After downloading the archive containing the SRL system, unpack it
and run =srl.sh -v -i=. This starts the verb SRL system in
interactive mode, where you can enter sentences on the command line
and get their verb semantic role labels. For nominal semantic role
labeling, replace =-v= with =-n=. The first sentence will take a
long time because the system must load its models into memory;
subsequent sentences will be faster. Note that this system
requires nearly 10 GB of RAM for verb SRL and about 5 GB for
nominals.
If this works, you are all set. You can now use the semantic role
labeler in one of three modes: as a Curator plugin, as a batch
annotator, or in interactive mode.
** Configuration
Most of the configuration of the SRL system can be provided via a
config file, specified with the command line option
=-c <config-file>=. If this option is not specified, the system
looks for the file =srl-config.properties= in the current
directory.
Here is a summary of the configuration options:
1. /CuratorHost/: Specifies the host of the curator instance which
provides the various inputs to the SRL system.
2. /CuratorPort/: Specifies the port on which the curator is
listening on /CuratorHost/.
3. /DefaultParser/: This can either be =Charniak= or
=Stanford=. This selects the constituent parser that provides
the features for the SRL system. It is assumed that the parser
corresponding to the choice here is provided by the
Curator. (Note: The SRL system has been trained using the
Charniak parser.)
4. /WordNetConfig/: Specifies the XML file that provides the
configuration for the Java WordNet Library (JWNL). An example
configuration file is provided as =jwnl_properties.xml=. The
path to the WordNet dictionary should be set in this file:
#+BEGIN_SRC xml
<param name="dictionary_path" value="/path/to/wordnet/dict/here"/>
#+END_SRC
5. /LoadWordNetConfigFromClassPath/: Specifies whether the WordNet
config file specified in /WordNetConfig/ should be loaded from
the classpath. This property can take either =true= or =false=
values. If =true=, the system will look for the WordNet
configuration file in the classpath. If =false= or if the
property is not present, it loads the file from the filesystem.
6. /Inference/: This can either be =BeamSearch= or =ILP= and selects
the inference algorithm that is used to make the final
prediction. If the choice is =BeamSearch=, an in-built beam
search engine is used for inference. If the choice is =ILP=,
then the Gurobi ILP solver will be used. (Note: To use ILP
inference, the Gurobi engine needs to be configured.)
7. /BeamSize/: Specifies the beam size if beam search inference is
chosen. Otherwise, this option is ignored.
8. /TrimLeadingPrepositions/: Specifies whether the leading
prepositions of arguments should be trimmed. If this is set to
true, a sentence like "John bought a car from Mary on Thursday
for 2000 dollars." would be analyzed as "bought(A0: John, A1: a
car, A2: Mary, A3: 2000 dollars, AM-TMP: Thursday)". If this is
set to false (or if the option is not present), the leading
prepositions are included, giving "bought(A0: John, A1: a car,
A2: from Mary, A3: for 2000 dollars, AM-TMP: on Thursday)". This
option applies to both verbs and nouns.
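As an illustration, a complete =srl-config.properties= might look like the
following sketch. The host name, port, and beam size shown here are
placeholder values for illustration, not defaults shipped with the
distribution.
#+BEGIN_SRC conf
CuratorHost = curator.example.com
CuratorPort = 9010
DefaultParser = Charniak
WordNetConfig = jwnl_properties.xml
LoadWordNetConfigFromClassPath = false
Inference = BeamSearch
BeamSize = 50
TrimLeadingPrepositions = true
#+END_SRC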
** Modes of use
For all three modes, either the =-v= or the =-n= argument is
required to indicate verb or nominal SRL, respectively.
*** As a Curator plugin
To start the SRL system as a curator plugin, run the following command:
#+BEGIN_SRC sh
./srl.sh [-v |-n ] -s <port-number> [-t <number-of-threads>]
#+END_SRC
The number of threads is optional and defaults to one.
After the server starts, the Curator instance can be configured to
use it to serve SRL outputs. The following XML snippet should be
added to the Curator annotator descriptor file (with appropriate
type, host and port entries):
#+BEGIN_SRC xml
<annotator>
<type>parser</type>
<field>srl</field>
<host>srl-host:srlport</host>
<requirement>sentences</requirement>
<requirement>tokens</requirement>
<requirement>pos</requirement>
<requirement>ner</requirement>
<requirement>chunk</requirement>
<requirement>charniak</requirement>
</annotator>
#+END_SRC
*** As a batch annotator
The SRL system can annotate several sentences as a batch by
running it on an input file containing one sentence per line.
Running the SRL in this form produces CoNLL-style column output
with the SRL annotation.
The following command runs the SRL in batch mode:
#+BEGIN_SRC sh
./srl.sh [-v | -n ] -b <input-file> -o <output-file> [-w]
#+END_SRC
Each line in the input file is treated as a separate sentence. The
option =-w= indicates that the sentences in the input file are
already whitespace-tokenized. Otherwise, the Curator is asked to
provide the tokenization.
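As a sketch of the batch workflow, the commands below build a small
whitespace-tokenized input file. The =srl.sh= call itself is shown as a
comment, since it needs the unpacked distribution (and a running Curator);
the file names are hypothetical.
#+BEGIN_SRC sh
# One sentence per line, pre-tokenized on whitespace (note the split-off periods).
printf 'John bought a car from Mary on Thursday .\n' >  srl-input.txt
printf 'I gave the dog food .\n'                     >> srl-input.txt

# Hypothetical invocation; output lands in srl-output.txt in CoNLL columns:
# ./srl.sh -v -b srl-input.txt -o srl-output.txt -w

wc -l < srl-input.txt   # one line per sentence
#+END_SRC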
*** Interactive mode
The SRL system can be used in interactive mode by running it
with the =-i= option.
* Papers that used this software
The following papers have used an earlier version of this software:
- G. Kundu and D. Roth, /Adapting Text Instead of the Model: An Open
  Domain Approach/. In Proceedings of the Conference on Computational
  Natural Language Learning (CoNLL), 2011.
- V. Srikumar and D. Roth, /A Joint Model for Extended Semantic Role
  Labeling/. In Proceedings of the Conference on Empirical Methods in
  Natural Language Processing (EMNLP), 2011.
If you use this package, please let me know and I will add the
reference to this list.
* References
1. V. Punyakanok, D. Roth and W. Yih, /The Importance of Syntactic
   Parsing and Inference in Semantic Role Labeling/. Computational
   Linguistics, 2008.
2. A. Meyers. /Those Other NomBank Dictionaries/. Technical report,
   New York University, 2007.
File deleted
% Created 2011-10-21 Fri 14:46
\documentclass[11pt]{article}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{fixltx2e}
\usepackage{graphicx}
\usepackage{longtable}
\usepackage{float}
\usepackage{wrapfig}
\usepackage{soul}
\usepackage{textcomp}
\usepackage{marvosym}
\usepackage{wasysym}
\usepackage{latexsym}
\usepackage{amssymb}
\usepackage{hyperref}
\tolerance=1000
\providecommand{\alert}[1]{\textbf{#1}}
\begin{document}
\title{The Illinois SRL Manual}
\author{Vivek Srikumar}
\date{}
\maketitle
\setcounter{tocdepth}{3}
\tableofcontents
\vspace*{1cm}
\section{Introduction}
\label{sec-1}
The Illinois SRL implements the single-parse Semantic Role Labeler
described in Punyakanok et al. (2008). Using a similar approach, it
also implements a nominal SRL system for the deverbal nouns in
NomBank (see Meyers (2007) for a detailed description of this
class).
This re-implementation is entirely in Java and achieves equivalent
performance on the Penn Treebank test set as described in the paper.
Using parse trees from the Charniak parser, the original work
achieves an average F1 of 76.29\%. In comparison, this
re-implementation gets an F1 of 76.47\% with beam search (which is
comparable to its performance with ILP inference). The nominal SRL
gets an F1 of 66.97\% with beam search.
\textbf{Citing this work}: Coming soon.
\section{Installation and usage}
\label{sec-2}
\subsection{Getting started}
\label{sec-2_1}
After downloading the archive containing the SRL system, unpack it
and run \texttt{srl.sh -v -i}. This starts the verb SRL system in
interactive mode, where you can enter sentences on the command line
and get their verb semantic role labels. For nominal semantic role
labeling, replace \texttt{-v} with \texttt{-n}. The first sentence
will take a long time because the system must load its models into
memory; subsequent sentences will be faster. Note that this system
requires nearly 10 GB of RAM for verb SRL and about 5 GB for
nominals.
If this works you are all set. You can now use the semantic role
labeler in one of three modes: as a curator plugin, as a batch
annotator and in the interactive mode.
\subsection{Configuration}
\label{sec-2_2}
Most of the configuration of the SRL system can be provided via a
config file, specified with the command line option
\texttt{-c <config-file>}. If this option is not specified, the
system looks for the file \texttt{srl-config.properties} in the
current directory.
Here is a summary of the configuration options:
\begin{enumerate}
\item \emph{CuratorHost}: Specifies the host of the curator instance which
provides the various inputs to the SRL system.
\item \emph{CuratorPort}: Specifies the port on which the curator is
listening on \emph{CuratorHost}.
\item \emph{DefaultParser}: This can either be \texttt{Charniak} or
\texttt{Stanford}. This selects the constituent parser that provides
the features for the SRL system. It is assumed that the parser
corresponding to the choice here is provided by the
Curator. (Note: The SRL system has been trained using the
Charniak parser.)
\item \emph{WordNetConfig}: Specifies the XML file that provides the
configuration for the Java WordNet Library (JWNL). An example
configuration file is provided as \texttt{jwnl\_properties.xml}. The
path to the WordNet dictionary should be set in this file.
\begin{verbatim}
<param name="dictionary_path" value="/path/to/wordnet/dict/here"/>
\end{verbatim}
\item \emph{LoadWordNetConfigFromClassPath}: Specifies whether the WordNet
config file specified in \emph{WordNetConfig} should be loaded from
the classpath. This property can take either \texttt{true} or \texttt{false}
values. If \texttt{true}, the system will look for the WordNet
configuration file in the classpath. If \texttt{false} or if the
property is not present, it loads the file from the filesystem.
\item \emph{Inference}: This can either be \texttt{BeamSearch} or \texttt{ILP} and selects
the inference algorithm that is used to make the final
prediction. If the choice is \texttt{BeamSearch}, an in-built beam
search engine is used for inference. If the choice is \texttt{ILP},
then the Gurobi ILP solver will be used. (Note: To use ILP
inference, the Gurobi engine needs to be configured.)
\item \emph{BeamSize}: Specifies the beam size if beam search inference is
chosen. Otherwise, this option is ignored.
\item \emph{TrimLeadingPrepositions}: Specifies whether the leading
prepositions of arguments should be trimmed. If this is set to
true, a sentence like ``John bought a car from Mary on Thursday
for 2000 dollars.'' would be analyzed as ``bought(A0: John, A1: a
car, A2: Mary, A3: 2000 dollars, AM-TMP: Thursday)''. If this is
set to false (or if the option is not present), the leading
prepositions are included, giving ``bought(A0: John, A1: a car,
A2: from Mary, A3: for 2000 dollars, AM-TMP: on Thursday)''. This
option applies to both verbs and nouns.
\end{enumerate}
\subsection{Modes of use}
\label{sec-2_3}
For all three modes, either the \texttt{-v} or the \texttt{-n} argument is
required to indicate verb or nominal SRL, respectively.
\subsubsection{As a Curator plugin}
\label{sec-2_3_1}
To start the SRL system as a curator plugin, run the following command:
\begin{verbatim}
./srl.sh [-v |-n ] -s <port-number> [-t <number-of-threads>]
\end{verbatim}
The number of threads need not be specified and defaults to using
one thread.
After the server starts, the curator instance can be configured to
use this to serve SRL outputs. The following XML snippet should be
added on to the curator annotator descriptor file (with appropriate
type, host and port entries):
\begin{verbatim}
<annotator>
<type>parser</type>
<field>srl</field>
<host>srl-host:srlport</host>
<requirement>sentences</requirement>
<requirement>tokens</requirement>
<requirement>pos</requirement>
<requirement>ner</requirement>
<requirement>chunk</requirement>
<requirement>charniak</requirement>
</annotator>
\end{verbatim}
\subsubsection{As a batch annotator}
\label{sec-2_3_2}
The SRL system can be used to annotate several sentences as a batch
by running it on an input file with a set of sentences. Running the
SRL in this form produces a CoNLL style column format with the SRL
annotation.
The following command runs the SRL in batch mode:
\begin{verbatim}
./srl.sh [-v | -n ] -b <input-file> -o <output-file> [-w]
\end{verbatim}
Each line in the input file is treated as a separate sentence. The
option \texttt{-w} indicates that the sentences in the input file are
whitespace tokenized. Otherwise, the curator is asked to provide
the tokenization.
\subsubsection{Interactive mode}
\label{sec-2_3_3}
The SRL system can be used in an interactive mode by running it
with the \texttt{-i} option.
\section{Papers that used this software}
\label{sec-3}
The following papers have used an earlier version of this software:
\begin{itemize}
\item G. Kundu and D. Roth, \emph{Adapting Text Instead of the Model: An Open
Domain Approach}. In Proceedings of the Conference on Computational
Natural Language Learning (CoNLL), 2011.
\item V. Srikumar and D. Roth, A Joint Model for Extended Semantic Role
Labeling. Proceedings of the Conference on Empirical Methods in
Natural Language Processing (EMNLP), 2011.
\end{itemize}
If you use this package, please let me know and I will add the
reference to this list here.
\section{References}
\label{sec-4}
\begin{enumerate}
\item V. Punyakanok, D. Roth and W. Yih, \emph{The Importance of Syntactic
Parsing and Inference in Semantic Role Labeling}. Computational
Linguistics, 2008.
\item A. Meyers. \emph{Those Other NomBank Dictionaries}. Technical report,
New York University, 2007.
\end{enumerate}
\end{document}
* Nominalizations
** Nombank dictionary statistics
- The following data comes from NOMLEX-plus-clean-1.0.
| Nominalization Class | Count |
|----------------------+-------|
| NOM | 3934 |
| NOMLIKE | 1244 |
| NOMING | 359 |
| ABLE-NOM | 18 |
|----------------------+-------|
| NOMADJ | 503 |
| NOMADJLIKE | 142 |
|----------------------+-------|
| PARTITIVE | 509 |
| ATTRIBUTE | 417 |
| RELATIONAL | 331 |
| WORK-OF-ART | 188 |
| ABILITY | 112 |
| ENVIRONMENT | 91 |
| GROUP | 84 |
| HALLMARK | 38 |
| JOB | 28 |
| VERSION | 21 |
| TYPE | 17 |
| EVENT | 12 |
| SHARE | 12 |
| ISSUE | 11 |
| CRISSCROSS | 7 |
| FIELD | 6 |
|----------------------+-------|
| Total | 8084 |
- Grouping these by type, we get:
| Nominalization type | Count | Percent |
|---------------------+-------+---------|
| Verbal | 5555 | 68.7 |
| Adjectival | 645 | 8.0 |
| Other | 1884 | 23.3 |
|---------------------+-------+---------|
| Total               |  8084 |   100.0 |
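The grouping above can be checked mechanically. This short script copies the
counts from the NOMLEX class table and reproduces the per-type totals and the
verbal percentage:
#+BEGIN_SRC python
# Counts copied from the NOMLEX-plus-clean-1.0 table above.
verbal = 3934 + 1244 + 359 + 18                 # NOM, NOMLIKE, NOMING, ABLE-NOM
adjectival = 503 + 142                          # NOMADJ, NOMADJLIKE
other = (509 + 417 + 331 + 188 + 112 + 91 + 84  # PARTITIVE .. GROUP
         + 38 + 28 + 21 + 17 + 12 + 12 + 11 + 7 + 6)  # HALLMARK .. FIELD
total = verbal + adjectival + other
print(verbal, adjectival, other, total)         # 5555 645 1884 8084
print(round(100 * verbal / total, 1))           # 68.7
#+END_SRC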
** Nombank training data statistics
- Statistics for distribution of NOMLEX classes over the training data
| Type | Number of occurrences |
|---------------+-----------------------|
| NOM | 50612 |
| NOMLIKE | 22583 |
| PARTITIVE | 11841 |
| ATTRIBUTE | 10456 |
| RELATIONAL | 7527 |
| ABILITY | 5904 |
| WORK_OF_ART | 3852 |
| GROUP | 3633 |
| NOMADJ | 3280 |
| NOMING | 3252 |
| SHARE | 3192 |
| ENVIRONMENT | 3108 |
| NOMADJLIKE | 2700 |
| JOB | 1502 |
| ISSUE | 1031 |
| VERSION | 725 |
| CRISSCROSS | 346 |
| FIELD | 330 |
| HALLMARK | 299 |
| EVENT | 172 |
| ABLE_NOM | 60 |
| UNKNOWN_CLASS | 1 |
- Statistics for distribution of NOMLEX classes for predicates that
have at least one of the four Deverbal classes
| Type | Number of occurrences |
|-------------+-----------------------|
| NOM | 50612 |
| NOMLIKE | 22583 |
| ATTRIBUTE | 4805 |
| PARTITIVE | 3998 |
| NOMING | 3252 |
| SHARE | 2646 |
| ABILITY | 2583 |
| WORK_OF_ART | 2134 |
| RELATIONAL | 2090 |
| NOMADJLIKE | 1489 |
| ENVIRONMENT | 1094 |
| GROUP | 1078 |
| JOB | 933 |
| NOMADJ | 650 |
| ISSUE | 592 |
| VERSION | 543 |
| TYPE | 328 |
| FIELD | 251 |
| CRISSCROSS | 212 |
| HALLMARK | 164 |
| ABLE_NOM | 60 |
* Version 3.0.2
For timing the systems, the experiments were conducted on a machine
with two 6-core Intel Xeon E5645 processors (12 MB cache, 2.40 GHz
clock). ILP inference was done using Gurobi v4. The beam search
does make use of the processor's multiple cores.
** Verb
*** Memory and time considerations
- Memory: At least 5.5 GB main memory
- Time for inference:
- ILP inference on 2400 sentences took 197467 ms
- Beam search on 2400 sentences took 163934 ms
*** Performance: ILP inference
- Number of Sentences : 2416
- Number of Propositions : 5267
- Percentage of perfect props : 50.56
corr. excess missed prec. rec. F1
------------------------------------------------------------
Overall 10610 3312 3467 76.21 75.37 75.79
----------
A0 3086 530 477 85.34 86.61 85.97
A1 3792 960 1135 79.80 76.96 78.36
A2 686 316 424 68.46 61.80 64.96
A3 94 53 79 63.95 54.34 58.75
A4 74 30 28 71.15 72.55 71.84
A5 3 4 2 42.86 60.00 50.00
AM 0 4 0 0.00 0.00 0.00
AM-ADV 264 260 242 50.38 52.17 51.26
AM-CAU 33 42 40 44.00 45.21 44.59
AM-DIR 36 39 49 48.00 42.35 45.00
AM-DIS 246 106 74 69.89 76.88 73.21
AM-EXT 14 12 18 53.85 43.75 48.28
AM-LOC 192 178 171 51.89 52.89 52.39
AM-MNR 192 181 152 51.47 55.81 53.56
AM-MOD 524 49 27 91.45 95.10 93.24
AM-NEG 207 29 23 87.71 90.00 88.84
AM-PNC 44 70 71 38.60 38.26 38.43
AM-PRD 1 6 4 14.29 20.00 16.67
AM-REC 0 1 2 0.00 0.00 0.00
AM-TMP 789 327 298 70.70 72.59 71.63
R-A0 182 47 42 79.48 81.25 80.35
R-A1 108 53 48 67.08 69.23 68.14
R-A2 6 4 10 60.00 37.50 46.15
R-A3 0 1 1 0.00 0.00 0.00
R-A4 0 0 1 0.00 0.00 0.00
R-AM-ADV 0 0 2 0.00 0.00 0.00
R-AM-CAU 1 0 3 100.00 25.00 40.00
R-AM-EXT 0 0 1 0.00 0.00 0.00
R-AM-LOC 8 1 13 88.89 38.10 53.33
R-AM-MNR 2 2 4 50.00 33.33 40.00
R-AM-TMP 26 7 26 78.79 50.00 61.18
------------------------------------------------------------
V 5259 8 8 99.85 99.85 99.85
------------------------------------------------------------
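The prec./rec./F1 columns in these tables derive directly from the corr./excess/missed counts. An illustrative Python sketch (our own helper, not code from the release):

```python
def prf(corr, excess, missed):
    """Derive precision, recall and F1 (as percentages) from the
    corr./excess/missed counts reported by the evaluation output."""
    p = corr / (corr + excess)    # predicted = corr + excess
    r = corr / (corr + missed)    # gold      = corr + missed
    f1 = 2 * p * r / (p + r) if p + r > 0 else 0.0
    return round(100 * p, 2), round(100 * r, 2), round(100 * f1, 2)

# "Overall" row of the ILP-inference table above
print(prf(10610, 3312, 3467))  # (76.21, 75.37, 75.79)
```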
*** Performance: Beam search
- Number of Sentences : 2416
- Number of Propositions : 5267
- Percentage of perfect props : 50.39
corr. excess missed prec. rec. F1
------------------------------------------------------------
Overall 10420 3009 3657 77.59 74.02 75.77
----------
A0 3045 490 518 86.14 85.46 85.80
A1 3702 903 1225 80.39 75.14 77.68
A2 664 289 446 69.67 59.82 64.37
A3 91 45 82 66.91 52.60 58.90
A4 75 27 27 73.53 73.53 73.53
A5 3 2 2 60.00 60.00 60.00
AM 0 1 0 0.00 0.00 0.00
AM-ADV 263 244 243 51.87 51.98 51.92
AM-CAU 33 36 40 47.83 45.21 46.48
AM-DIR 35 31 50 53.03 41.18 46.36
AM-DIS 244 93 76 72.40 76.25 74.28
AM-EXT 14 13 18 51.85 43.75 47.46
AM-LOC 187 160 176 53.89 51.52 52.68
AM-MNR 189 153 155 55.26 54.94 55.10
AM-MOD 518 47 33 91.68 94.01 92.83
AM-NEG 207 31 23 86.97 90.00 88.46
AM-PNC 47 62 68 43.12 40.87 41.96
AM-PRD 1 5 4 16.67 20.00 18.18
AM-REC 0 1 2 0.00 0.00 0.00
AM-TMP 777 286 310 73.10 71.48 72.28
R-A0 178 27 46 86.83 79.46 82.98
R-A1 105 42 51 71.43 67.31 69.31
R-A2 5 6 11 45.45 31.25 37.04
R-A3 0 1 1 0.00 0.00 0.00
R-A4 0 0 1 0.00 0.00 0.00
R-AM-ADV 0 0 2 0.00 0.00 0.00
R-AM-CAU 2 0 2 100.00 50.00 66.67
R-AM-EXT 0 0 1 0.00 0.00 0.00
R-AM-LOC 8 2 13 80.00 38.10 51.61
R-AM-MNR 1 3 5 25.00 16.67 20.00
R-AM-TMP 26 9 26 74.29 50.00 59.77
------------------------------------------------------------
V 5258 10 9 99.81 99.83 99.82
------------------------------------------------------------
** Nominalizations
*** Memory and time considerations
- Memory: At least 4 GB main memory
- Time for inference:
- ILP inference on 2400 sentences took 78835 ms
- Beam search on 2400 sentences took 81746 ms
*** Performance: ILP inference
- Number of Sentences : 2416
- Number of Propositions : 3793
- Percentage of perfect props : 40.55
corr. excess missed prec. rec. F1
------------------------------------------------------------
Overall 4646 1632 2981 74.00 60.92 66.82
----------
A0 1188 304 589 79.62 66.85 72.68
A1 1629 513 997 76.05 62.03 68.33
A2 598 191 408 75.79 59.44 66.63
A3 138 39 86 77.97 61.61 68.83
A4 5 9 12 35.71 29.41 32.26
A5 1 0 0 100.00 100.00 100.00
A8 2 2 3 50.00 40.00 44.44
A9 0 0 2 0.00 0.00 0.00
AM-ADV 4 7 17 36.36 19.05 25.00
AM-CAU 0 1 0 0.00 0.00 0.00
AM-DIR 1 2 1 33.33 50.00 40.00
AM-DIS 0 0 2 0.00 0.00 0.00
AM-EXT 20 14 14 58.82 58.82 58.82
AM-LOC 102 68 97 60.00 51.26 55.28
AM-MNR 216 98 125 68.79 63.34 65.95
AM-NEG 18 4 11 81.82 62.07 70.59
AM-PNC 3 0 8 100.00 27.27 42.86
AM-TMP 279 78 123 78.15 69.40 73.52
R-A0 4 3 27 57.14 12.90 21.05
R-A1 4 3 12 57.14 25.00 34.78
R-A2 0 1 7 0.00 0.00 0.00
R-A3 0 0 2 0.00 0.00 0.00
R-A4 0 0 1 0.00 0.00 0.00
SUP 434 295 437 59.53 49.83 54.25
------------------------------------------------------------
V 2513 195 148 92.80 94.44 93.61
------------------------------------------------------------
*** Performance: Beam search
- Number of Sentences : 2416
- Number of Propositions : 3793
- Percentage of perfect props : 39.52
corr. excess missed prec. rec. F1
------------------------------------------------------------
Overall 4566 1633 3061 73.66 59.87 66.05
----------
A0 1181 360 596 76.64 66.46 71.19
A1 1585 499 1041 76.06 60.36 67.30
A2 590 190 416 75.64 58.65 66.07
A3 136 35 88 79.53 60.71 68.86
A4 4 10 13 28.57 23.53 25.81
A5 0 0 1 0.00 0.00 0.00
A8 2 2 3 50.00 40.00 44.44
A9 0 1 2 0.00 0.00 0.00
AM-ADV 4 7 17 36.36 19.05 25.00
AM-CAU 0 1 0 0.00 0.00 0.00
AM-DIR 1 1 1 50.00 50.00 50.00
AM-DIS 0 0 2 0.00 0.00 0.00
AM-EXT 20 10 14 66.67 58.82 62.50
AM-LOC 100 66 99 60.24 50.25 54.79
AM-MNR 213 97 128 68.71 62.46 65.44
AM-NEG 17 2 12 89.47 58.62 70.83
AM-PNC 3 0 8 100.00 27.27 42.86
AM-TMP 274 71 128 79.42 68.16 73.36
R-A0 4 3 27 57.14 12.90 21.05
R-A1 4 1 12 80.00 25.00 38.10
R-A2 0 0 7 0.00 0.00 0.00
R-A3 0 0 2 0.00 0.00 0.00
R-A4 0 0 1 0.00 0.00 0.00
SUP 428 277 443 60.71 49.14 54.31
------------------------------------------------------------
V 2523 205 138 92.49 94.81 93.64
------------------------------------------------------------
body { text-align: center }
#content {
margin: auto auto;
text-align: left;
width: 700px;
color: #2e2e2e;
font-family: Arial, Helvetica Neue, Helvetica, sans-serif;
font-size: 12pt;
text-align: left;
}
#menu {
margin-top: 70px;
padding-top: 10px;
color: #000;
font-size: 14pt;
padding-bottom: 5px;
border: 1px #bababa solid;
}
#wrapper {
padding-top: 40px;
border-left: 1px #bababa solid;
border-right: 1px #bababa solid;
padding-left: 20px;
padding-right: 20px;
}
#sidetitle {
color: #000;
font-size: 14pt;
padding-right: 20px;
text-align: right;
text-transform: lowercase;
}
#sidemenu {
text-transform: lowercase;
float: left;
padding-left: 20px;
}
sup { line-height: 1px }
h1 {
color: #000;
font-weight: normal;
font-size: 24pt;
clear: both;
}
#table-of-contents {
font-size: 12pt;
line-height: 180%;
margin-bottom: 30px;
overflow: auto;
}
#table-of-contents h2 { display: none }
#table-of-contents ul {
display: inline;
margin: 0;
padding: 0;
}
#table-of-contents ul li {
list-style-type: none;
/*display: inline;*/
}
#table-of-contents ul li a {
text-decoration: none;
margin: 3px;
color: #000;
padding: 0 0.5em;
/*float: left;*/
background-color: #fff;
white-space: nowrap;
}
#table-of-contents ul li a:hover {
background-color: #a2b5cd;
color: #000;
}
#table-of-contents ul li ul { font-size: 11pt }
#table-of-contents ul li ul li a {
color: #000;
background-color: #fff;
margin-left: 30px;
}
#table-of-contents ul li ul li a:hover {
background-color: #bcd2ee;
}
#table-of-contents ul li ul li ul { font-size: 9pt }
#table-of-contents ul li ul li ul li a {
color: #000;
background-color: #e0eeee;
}
.title {
text-align: left;
margin-bottom: 15px;
}
hr {
border: 0;
height: 1px;
color: #bababa;
background-color: #bababa;
}
h2 {
font-size: 14pt;
color: #000;
clear: both;
padding-bottom: 2px;
margin-top: 30px;
margin-bottom: 20px;
border-bottom: 1px solid #bababa;
}
h3 {
font-size: 12pt;
color: #000;
}
h4 { font-size: 11pt }
a {
text-decoration: none;
color: #4a708b;
}
a:hover { text-decoration: underline }
.todo { color: #ff0000 }
.done { color: #006666 }
.timestamp-kwd { color: #444 }
.tag {
color: #000;
background-color: #fff;
font-variant: small-caps;
font-size: 80%;
font-weight: 500;
}
.timestamp-wrapper { font-size: 80% }
.timestamp { color: #555 }
code { font-size: 10pt }
table { border: 1px solid #bababa }
pre {
border: 1px solid #555;
background: #EEEEEE;
font-size: 9pt;
padding: 1em;
}
img { border: none }
.share img {
opacity: .4;
-moz-opacity: .4;
filter: alpha(opacity=40);
}
.share img:hover {
opacity: 1;
-moz-opacity: 1;
filter: alpha(opacity=100);
}
.org-info-search-highlight {
background-color: #adefef; /* same color as emacs default */
color: #000000;
font-weight: bold;
}
.org-bbdb-company {
/* bbdb-company */
font-style: italic;
}
.org-bbdb-field-name { }
.org-bbdb-field-value { }
.org-bbdb-name {
/* bbdb-name */
text-decoration: underline;
}
.org-bold {
/* bold */
font-weight: bold;
}
.org-bold-italic {
/* bold-italic */
font-weight: bold;
font-style: italic;
}
.org-border {
/* border */
background-color: #000000;
}
.org-buffer-menu-buffer {
/* buffer-menu-buffer */
font-weight: bold;
}
.org-builtin {
/* font-lock-builtin-face */
color: #da70d6;
}
.org-button {
/* button */
text-decoration: underline;
}
.org-c-nonbreakable-space {
/* c-nonbreakable-space-face */
background-color: #ff0000;
font-weight: bold;
}
.org-calendar-today {
/* calendar-today */
text-decoration: underline;
}
.org-comment {
/* font-lock-comment-face */
color: #b22222;
}
.org-comment-delimiter {
/* font-lock-comment-delimiter-face */
color: #b22222;
}
.org-constant {
/* font-lock-constant-face */
color: #5f9ea0;
}
.org-cursor {
/* cursor */
background-color: #000000;
}
.org-default {
/* default */
color: #000000;
background-color: #ffffff;
}
.org-diary {
/* diary */
color: #ff0000;
}
.org-doc {
/* font-lock-doc-face */
color: #bc8f8f;
}
.org-escape-glyph {
/* escape-glyph */
color: #a52a2a;
}
.org-file-name-shadow {
/* file-name-shadow */
color: #7f7f7f;
}
.org-fixed-pitch { }
.org-fringe {
/* fringe */
background-color: #f2f2f2;
}
.org-function-name {
/* font-lock-function-name-face */
color: #0000ff;
}
.org-header-line {
/* header-line */
color: #333333;
background-color: #e5e5e5;
}
.org-help-argument-name {
/* help-argument-name */
font-style: italic;
}
.org-highlight {
/* highlight */
background-color: #b4eeb4;
}
.org-holiday {
/* holiday */
background-color: #ffc0cb;
}
.org-info-header-node {
/* info-header-node */
color: #a52a2a;
font-weight: bold;
font-style: italic;
}
.org-info-header-xref {
/* info-header-xref */
color: #0000ff;
text-decoration: underline;
}
.org-info-menu-header {
/* info-menu-header */
font-weight: bold;
}
.org-info-menu-star {
/* info-menu-star */
color: #ff0000;
}
.org-info-node {
/* info-node */
color: #a52a2a;
font-weight: bold;
font-style: italic;
}
.org-info-title-1 {
/* info-title-1 */
font-size: 172%;
font-weight: bold;
}
.org-info-title-2 {
/* info-title-2 */
font-size: 144%;
font-weight: bold;
}
.org-info-title-3 {
/* info-title-3 */
font-size: 120%;
font-weight: bold;
}
.org-info-title-4 {
/* info-title-4 */
font-weight: bold;
}
.org-info-xref {
/* info-xref */
color: #0000ff;
text-decoration: underline;
}
.org-isearch {
/* isearch */
color: #b0e2ff;
background-color: #cd00cd;
}
.org-italic {
/* italic */
font-style: italic;
}
.org-keyword {
/* font-lock-keyword-face */
color: #a020f0;
}
.org-lazy-highlight {
/* lazy-highlight */
background-color: #afeeee;
}
.org-link {
/* link */
color: #0000ff;
text-decoration: underline;
}
.org-link-visited {
/* link-visited */
color: #8b008b;
text-decoration: underline;
}
.org-match {
/* match */
background-color: #ffff00;
}
.org-menu { }
.org-message-cited-text {
/* message-cited-text */
color: #ff0000;
}
.org-message-header-cc {
/* message-header-cc */
color: #191970;
}
.org-message-header-name {
/* message-header-name */
color: #6495ed;
}
.org-message-header-newsgroups {
/* message-header-newsgroups */
color: #00008b;
font-weight: bold;
font-style: italic;
}
.org-message-header-other {
/* message-header-other */
color: #4682b4;
}
.org-message-header-subject {
/* message-header-subject */
color: #000080;
font-weight: bold;
}
.org-message-header-to {
/* message-header-to */
color: #191970;
font-weight: bold;
}
.org-message-header-xheader {
/* message-header-xheader */
color: #0000ff;
}
.org-message-mml {
/* message-mml */
color: #228b22;
}
.org-message-separator {
/* message-separator */
color: #a52a2a;
}
.org-minibuffer-prompt {
/* minibuffer-prompt */
color: #0000cd;
}
.org-mm-uu-extract {
/* mm-uu-extract */
color: #006400;
background-color: #ffffe0;
}
.org-mode-line {
/* mode-line */
color: #000000;
background-color: #bfbfbf;
}
.org-mode-line-buffer-id {
/* mode-line-buffer-id */
font-weight: bold;
}
.org-mode-line-highlight { }
.org-mode-line-inactive {
/* mode-line-inactive */
color: #333333;
background-color: #e5e5e5;
}
.org-mouse {
/* mouse */
background-color: #000000;
}
.org-negation-char { }
.org-next-error {
/* next-error */
background-color: #eedc82;
}
.org-nobreak-space {
/* nobreak-space */
color: #a52a2a;
text-decoration: underline;
}
.org-org-agenda-date {
/* org-agenda-date */
color: #0000ff;
}
.org-org-agenda-date-weekend {
/* org-agenda-date-weekend */
color: #0000ff;
font-weight: bold;
}
.org-org-agenda-restriction-lock {
/* org-agenda-restriction-lock */
background-color: #ffff00;
}
.org-org-agenda-structure {
/* org-agenda-structure */
color: #0000ff;
}
.org-org-archived {
/* org-archived */
color: #7f7f7f;
}
.org-org-code {
/* org-code */
color: #7f7f7f;
}
.org-org-column {
/* org-column */
background-color: #e5e5e5;
}
.org-org-column-title {
/* org-column-title */
background-color: #e5e5e5;
font-weight: bold;
text-decoration: underline;
}
.org-org-date {
/* org-date */
color: #a020f0;
text-decoration: underline;
}
.org-org-done {
/* org-done */
color: #228b22;
font-weight: bold;
}
.org-org-drawer {
/* org-drawer */
color: #0000ff;
}
.org-org-ellipsis {
/* org-ellipsis */
color: #b8860b;
text-decoration: underline;
}
.org-org-formula {
/* org-formula */
color: #b22222;
}
.org-org-headline-done {
/* org-headline-done */
color: #bc8f8f;
}
.org-org-hide {
/* org-hide */
color: #e5e5e5;
}
.org-org-latex-and-export-specials {
/* org-latex-and-export-specials */
color: #8b4513;
}
.org-org-level-1 {
/* org-level-1 */
color: #0000ff;
}
.org-org-level-2 {
/* org-level-2 */
color: #b8860b;
}
.org-org-level-3 {
/* org-level-3 */
color: #a020f0;
}
.org-org-level-4 {
/* org-level-4 */
color: #b22222;
}
.org-org-level-5 {
/* org-level-5 */
color: #228b22;
}
.org-org-level-6 {
/* org-level-6 */
color: #5f9ea0;
}
.org-org-level-7 {
/* org-level-7 */
color: #da70d6;
}
.org-org-level-8 {
/* org-level-8 */
color: #bc8f8f;
}
.org-org-link {
/* org-link */
color: #a020f0;
text-decoration: underline;
}
.org-org-property-value { }
.org-org-scheduled-previously {
/* org-scheduled-previously */
color: #b22222;
}
.org-org-scheduled-today {
/* org-scheduled-today */
color: #006400;
}
.org-org-sexp-date {
/* org-sexp-date */
color: #a020f0;
}
.org-org-special-keyword {
/* org-special-keyword */
color: #bc8f8f;
}
.org-org-table {
/* org-table */
color: #0000ff;
}
.org-org-tag {
/* org-tag */
font-weight: bold;
}
.org-org-target {
/* org-target */
text-decoration: underline;
}
.org-org-time-grid {
/* org-time-grid */
color: #b8860b;
}
.org-org-todo {
/* org-todo */
color: #ff0000;
}
.org-org-upcoming-deadline {
/* org-upcoming-deadline */
color: #b22222;
}
.org-org-verbatim {
/* org-verbatim */
color: #7f7f7f;
text-decoration: underline;
}
.org-org-warning {
/* org-warning */
color: #ff0000;
font-weight: bold;
}
.org-outline-1 {
/* outline-1 */
color: #0000ff;
}
.org-outline-2 {
/* outline-2 */
color: #b8860b;
}
.org-outline-3 {
/* outline-3 */
color: #a020f0;
}
.org-outline-4 {
/* outline-4 */
color: #b22222;
}
.org-outline-5 {
/* outline-5 */
color: #228b22;
}
.org-outline-6 {
/* outline-6 */
color: #5f9ea0;
}
.org-outline-7 {
/* outline-7 */
color: #da70d6;
}
.org-outline-8 {
/* outline-8 */
color: #bc8f8f;
}
.org-preprocessor {
/* font-lock-preprocessor-face */
color: #da70d6;
}
.org-query-replace {
/* query-replace */
color: #b0e2ff;
background-color: #cd00cd;
}
.org-regexp-grouping-backslash {
/* font-lock-regexp-grouping-backslash */
font-weight: bold;
}
.org-regexp-grouping-construct {
/* font-lock-regexp-grouping-construct */
font-weight: bold;
}
.org-region {
/* region */
background-color: #eedc82;
}
.org-rmail-highlight { }
.org-scroll-bar {
/* scroll-bar */
background-color: #bfbfbf;
}
.org-secondary-selection {
/* secondary-selection */
background-color: #ffff00;
}
.org-shadow {
/* shadow */
color: #7f7f7f;
}
.org-show-paren-match {
/* show-paren-match */
background-color: #40e0d0;
}
.org-show-paren-mismatch {
/* show-paren-mismatch */
color: #ffffff;
background-color: #a020f0;
}
.org-string {
/* font-lock-string-face */
color: #bc8f8f;
}
.org-texinfo-heading {
/* texinfo-heading */
color: #0000ff;
}
.org-tool-bar {
/* tool-bar */
color: #000000;
background-color: #bfbfbf;
}
.org-tooltip {
/* tooltip */
color: #000000;
background-color: #ffffe0;
}
.org-trailing-whitespace {
/* trailing-whitespace */
background-color: #ff0000;
}
.org-type {
/* font-lock-type-face */
color: #228b22;
}
.org-underline {
/* underline */
text-decoration: underline;
}
.org-variable-name {
/* font-lock-variable-name-face */
color: #b8860b;
}
.org-variable-pitch { }
.org-vertical-border { }
.org-warning {
/* font-lock-warning-face */
color: #ff0000;
font-weight: bold;
}
div#postamble {
text-align: right;
font-size: 9pt;
padding-right: 20px;
padding-left: 20px;
padding-top: 0.3em;
padding-bottom: 1em;
margin-top: 80px;
line-height: 50%;
border-top: 1px dotted #333333;
}
div#postamble-extra { font-size: 9pt }
@@ -5,7 +5,7 @@
<groupId>edu.illinois.cs.cogcomp</groupId>
<artifactId>illinois-srl</artifactId>
<packaging>jar</packaging>
<version>5.0</version>
<version>5.1</version>
<url>http://cogcomp.cs.illinois.edu</url>
@@ -42,7 +42,7 @@
<dependency>
<groupId>edu.illinois.cs.cogcomp</groupId>
<artifactId>illinois-nlp-pipeline</artifactId>
<version>0.0.9</version>
<version>0.1.2-SNAPSHOT</version>
</dependency>
<dependency>
@@ -75,13 +75,6 @@
</exclusions>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>3.8.1</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>com.h2database</groupId>
<artifactId>h2</artifactId>
@@ -91,7 +84,7 @@
<dependency>
<groupId>edu.illinois.cs.cogcomp</groupId>
<artifactId>edison</artifactId>
<version>0.7.4</version>
<version>0.7.8</version>
</dependency>
<dependency>
@@ -100,35 +93,19 @@
<version>0.4.1</version>
</dependency>
<dependency>
<groupId>commons-configuration</groupId>
<artifactId>commons-configuration</artifactId>
<version>1.6</version>
<exclusions>
<exclusion>
<artifactId>commons-digester</artifactId>
<groupId>commons-digester</groupId>
</exclusion>
<exclusion>
<artifactId>commons-beanutils-core</artifactId>
<groupId>commons-beanutils</groupId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>commons-lang</groupId>
<artifactId>commons-lang</artifactId>
<version>2.4</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-simple</artifactId>
<version>1.6.1</version>
<optional>true</optional>
</dependency>
</dependencies>
<dependency>
<groupId>org.tartarus</groupId>
<artifactId>snowball</artifactId>
<version>1.0</version>
</dependency>
</dependencies>
<build>
@@ -174,7 +151,7 @@
<repository>
<id>CogcompSoftware</id>
<name>CogcompSoftware</name>
<url>scp://bilbo.cs.uiuc.edu:/mounts/bilbo/disks/0/www/cogcomp/html/m2repo</url>
<url>scp://bilbo.cs.illinois.edu:/mounts/bilbo/disks/0/www/cogcomp/html/m2repo</url>
</repository>
</distributionManagement>
#!/bin/bash
svn up
VERSION=`mvn org.apache.maven.plugins:maven-help-plugin:2.1.1:evaluate -Dexpression=project.version | grep -v 'INFO'`
mvn clean package
mvn dependency:copy-dependencies
bash compileModelsToJar.sh
LIBDIR=curator-release/lib
rm -rdf $LIBDIR/*
cp target/illinoisSRL-$VERSION.jar $LIBDIR
cp target/illinoisSRL-verb-models-$VERSION.jar $LIBDIR
cp target/illinoisSRL-nom-models-$VERSION.jar $LIBDIR
cp target/dependency/JLIS-core-0.5.jar $LIBDIR
cp target/dependency/JLIS-multiclass-0.5.jar $LIBDIR
cp target/dependency/LBJ-2.8.2.jar $LIBDIR
cp target/dependency/LBJLibrary-2.8.2.jar $LIBDIR
cp target/dependency/brown-clusters-1.0.jar $LIBDIR
cp target/dependency/commons-codec-1.8.jar $LIBDIR
cp target/dependency/commons-collections-3.2.1.jar $LIBDIR
cp target/dependency/commons-configuration-1.6.jar $LIBDIR
cp target/dependency/commons-lang-2.5.jar $LIBDIR
cp target/dependency/commons-logging-1.1.1.jar $LIBDIR
cp target/dependency/coreUtilities-0.1.8.jar $LIBDIR
cp target/dependency/curator-interfaces-0.7.jar $LIBDIR
cp target/dependency/edison-0.5.jar $LIBDIR
cp target/dependency/gson-2.2.4.jar $LIBDIR
cp target/dependency/httpclient-4.1.2.jar $LIBDIR
cp target/dependency/httpcore-4.1.3.jar $LIBDIR
cp target/dependency/inference-0.3.jar $LIBDIR
cp target/dependency/jgrapht-0.8.3.jar $LIBDIR
cp target/dependency/jwnl-1.4_rc3.jar $LIBDIR
cp target/dependency/libthrift-0.8.0.jar $LIBDIR
cp target/dependency/logback-classic-0.9.28.jar $LIBDIR
cp target/dependency/logback-core-0.9.28.jar $LIBDIR
cp target/dependency/slf4j-api-1.6.1.jar $LIBDIR
cp target/dependency/snowball-1.0.jar $LIBDIR
cp target/dependency/trove4j-3.0.3.jar $LIBDIR
cp target/dependency/verb-nom-data-1.0.jar $LIBDIR
* ILP inference
Number of Sentences : 2416
Number of Propositions : 5267
Percentage of perfect props : 51.26
corr. excess missed prec. rec. F1
------------------------------------------------------------
Overall 10391 2435 3686 81.02 73.82 77.25
----------
A0 3035 371 528 89.11 85.18 87.10
A1 3740 813 1187 82.14 75.91 78.90
A2 668 250 442 72.77 60.18 65.88
A3 92 33 81 73.60 53.18 61.74
A4 71 21 31 77.17 69.61 73.20
A5 3 0 2 100.00 60.00 75.00
AM 0 1 0 0.00 0.00 0.00
AM-ADV 253 174 253 59.25 50.00 54.23
AM-CAU 31 22 42 58.49 42.47 49.21
AM-DIR 31 22 54 58.49 36.47 44.93
AM-DIS 230 65 90 77.97 71.88 74.80
AM-EXT 14 11 18 56.00 43.75 49.12
AM-LOC 181 131 182 58.01 49.86 53.63
AM-MNR 183 124 161 59.61 53.20 56.22
AM-MOD 523 34 28 93.90 94.92 94.40
AM-NEG 207 18 23 92.00 90.00 90.99
AM-PNC 40 44 75 47.62 34.78 40.20
AM-PRD 1 4 4 20.00 20.00 20.00
AM-REC 0 0 2 0.00 0.00 0.00
AM-TMP 768 235 319 76.57 70.65 73.49
R-A0 177 25 47 87.62 79.02 83.10
R-A1 107 28 49 79.26 68.59 73.54
R-A2 5 2 11 71.43 31.25 43.48
R-A3 0 0 1 0.00 0.00 0.00
R-A4 0 0 1 0.00 0.00 0.00
R-AM-ADV 0 0 2 0.00 0.00 0.00
R-AM-CAU 0 0 4 0.00 0.00 0.00
R-AM-EXT 0 0 1 0.00 0.00 0.00
R-AM-LOC 6 0 15 100.00 28.57 44.44
R-AM-MNR 0 1 6 0.00 0.00 0.00
R-AM-TMP 25 6 27 80.65 48.08 60.24
------------------------------------------------------------
V 5259 8 8 99.85 99.85 99.85
------------------------------------------------------------
* Beam search (beam size = 10)
Number of Sentences : 2416
Number of Propositions : 5267
Percentage of perfect props : 51.15
corr. excess missed prec. rec. F1
------------------------------------------------------------
Overall 10415 2501 3662 80.64 73.99 77.17
----------
A0 3054 424 509 87.81 85.71 86.75
A1 3748 862 1179 81.30 76.07 78.60
A2 670 267 440 71.50 60.36 65.46
A3 94 36 79 72.31 54.34 62.05
A4 71 23 31 75.53 69.61 72.45
A5 3 0 2 100.00 60.00 75.00
AM-ADV 254 180 252 58.53 50.20 54.04
AM-CAU 31 18 42 63.27 42.47 50.82
AM-DIR 31 21 54 59.62 36.47 45.26
AM-DIS 230 64 90 78.23 71.88 74.92
AM-EXT 14 7 18 66.67 43.75 52.83
AM-LOC 182 127 181 58.90 50.14 54.17
AM-MNR 183 125 161 59.42 53.20 56.13
AM-MOD 523 14 28 97.39 94.92 96.14
AM-NEG 207 12 23 94.52 90.00 92.20
AM-PNC 41 40 74 50.62 35.65 41.84
AM-PRD 1 3 4 25.00 20.00 22.22
AM-REC 0 1 2 0.00 0.00 0.00
AM-TMP 761 224 326 77.26 70.01 73.46
R-A0 178 20 46 89.90 79.46 84.36
R-A1 103 21 53 83.06 66.03 73.57
R-A2 5 4 11 55.56 31.25 40.00
R-A3 0 0 1 0.00 0.00 0.00
R-A4 0 0 1 0.00 0.00 0.00
R-AM-ADV 0 0 2 0.00 0.00 0.00
R-AM-CAU 0 0 4 0.00 0.00 0.00
R-AM-EXT 0 0 1 0.00 0.00 0.00
R-AM-LOC 7 0 14 100.00 33.33 50.00
R-AM-MNR 0 1 6 0.00 0.00 0.00
R-AM-PNC 0 1 0 0.00 0.00 0.00
R-AM-TMP 24 6 28 80.00 46.15 58.54
------------------------------------------------------------
V 5259 8 8 99.85 99.85 99.85
------------------------------------------------------------
* Beam search (beam size = 25)
Number of Sentences : 2416
Number of Propositions : 5267
Percentage of perfect props : 51.36
corr. excess missed prec. rec. F1
------------------------------------------------------------
Overall 10413 2469 3664 80.83 73.97 77.25
----------
A0 3051 397 512 88.49 85.63 87.03
A1 3746 834 1181 81.79 76.03 78.81
A2 671 275 439 70.93 60.45 65.27
A3 94 36 79 72.31 54.34 62.05
A4 71 22 31 76.34 69.61 72.82
A5 3 0 2 100.00 60.00 75.00
AM 0 1 0 0.00 0.00 0.00
AM-ADV 254 185 252 57.86 50.20 53.76
AM-CAU 31 18 42 63.27 42.47 50.82
AM-DIR 31 22 54 58.49 36.47 44.93
AM-DIS 230 64 90 78.23 71.88 74.92
AM-EXT 14 6 18 70.00 43.75 53.85
AM-LOC 181 126 182 58.96 49.86 54.03
AM-MNR 184 127 160 59.16 53.49 56.18
AM-MOD 523 15 28 97.21 94.92 96.05
AM-NEG 207 13 23 94.09 90.00 92.00
AM-PNC 40 41 75 49.38 34.78 40.82
AM-PRD 1 4 4 20.00 20.00 20.00
AM-REC 0 1 2 0.00 0.00 0.00
AM-TMP 764 229 323 76.94 70.29 73.46
R-A0 177 17 47 91.24 79.02 84.69
R-A1 103 25 53 80.47 66.03 72.54
R-A2 6 4 10 60.00 37.50 46.15
R-A3 0 0 1 0.00 0.00 0.00
R-A4 0 0 1 0.00 0.00 0.00
R-AM-ADV 0 0 2 0.00 0.00 0.00
R-AM-CAU 0 0 4 0.00 0.00 0.00
R-AM-EXT 0 0 1 0.00 0.00 0.00
R-AM-LOC 7 0 14 100.00 33.33 50.00
R-AM-MNR 0 1 6 0.00 0.00 0.00
R-AM-TMP 24 6 28 80.00 46.15 58.54
------------------------------------------------------------
V 5259 8 8 99.85 99.85 99.85
------------------------------------------------------------
| Label | TotalGold | TotalPredicted | CorrectPrediction | Precision | Recall | F1 |
|--------------------+-----------+----------------+-------------------+-----------+--------+-------|
| A0 | 1760 | 1807 | 1609 | 0.89 | 0.914 | 0.902 |
| A1 | 2061 | 2271 | 1888 | 0.831 | 0.916 | 0.872 |
| A2 | 436 | 475 | 357 | 0.752 | 0.819 | 0.784 |
| A3 | 70 | 62 | 50 | 0.806 | 0.714 | 0.758 |
| A4 | 40 | 43 | 33 | 0.767 | 0.825 | 0.795 |
| A5 | 1 | 1 | 1 | 1 | 1 | 1 |
| AM-ADV | 215 | 207 | 134 | 0.647 | 0.623 | 0.635 |
| AM-CAU | 29 | 24 | 19 | 0.792 | 0.655 | 0.717 |
| AM-DIR | 25 | 21 | 10 | 0.476 | 0.4 | 0.435 |
| AM-DIS | 133 | 141 | 112 | 0.794 | 0.842 | 0.818 |
| AM-EXT | 14 | 9 | 7 | 0.778 | 0.5 | 0.609 |
| AM-LOC | 143 | 157 | 96 | 0.611 | 0.671 | 0.64 |
| AM-MNR | 133 | 144 | 89 | 0.618 | 0.669 | 0.643 |
| AM-MOD | 257 | 260 | 255 | 0.981 | 0.992 | 0.986 |
| AM-NEG | 118 | 122 | 117 | 0.959 | 0.992 | 0.975 |
| AM-PNC | 42 | 44 | 26 | 0.591 | 0.619 | 0.605 |
| AM-PRD | 0 | 1 | 0 | 0 | 0 | 0 |
| AM-REC | 2 | 0 | 0 | 0 | 0 | 0 |
| AM-TMP | 454 | 491 | 391 | 0.796 | 0.861 | 0.828 |
| C-A0 | 3 | 3 | 3 | 1 | 1 | 1 |
| C-A1 | 75 | 68 | 51 | 0.75 | 0.68 | 0.713 |
| C-A2 | 1 | 0 | 0 | 0 | 0 | 0 |
| C-A3 | 1 | 0 | 0 | 0 | 0 | 0 |
| C-A4 | 0 | 1 | 0 | 0 | 0 | 0 |
| C-AM-ADV | 0 | 2 | 0 | 0 | 0 | 0 |
| C-AM-CAU | 0 | 24 | 0 | 0 | 0 | 0 |
| C-AM-DIR | 0 | 1 | 0 | 0 | 0 | 0 |
| C-AM-EXT | 0 | 1 | 0 | 0 | 0 | 0 |
| C-AM-LOC | 0 | 2 | 0 | 0 | 0 | 0 |
| C-AM-NEG | 0 | 31 | 0 | 0 | 0 | 0 |
| C-AM-PNC | 0 | 13 | 0 | 0 | 0 | 0 |
| C-V | 2 | 1 | 0 | 0 | 0 | 0 |
| R-A0 | 102 | 104 | 92 | 0.885 | 0.902 | 0.893 |
| R-A1 | 71 | 73 | 57 | 0.781 | 0.803 | 0.792 |
| R-A2 | 7 | 2 | 1 | 0.5 | 0.143 | 0.222 |
| R-AA | 0 | 8 | 0 | 0 | 0 | 0 |
| R-AM-ADV | 1 | 0 | 0 | 0 | 0 | 0 |
| R-AM-CAU | 2 | 1 | 1 | 1 | 0.5 | 0.667 |
| R-AM-DIR | 0 | 7 | 0 | 0 | 0 | 0 |
| R-AM-EXT | 1 | 0 | 0 | 0 | 0 | 0 |
| R-AM-LOC | 13 | 8 | 8 | 1 | 0.615 | 0.762 |
| R-AM-MNR | 3 | 1 | 0 | 0 | 0 | 0 |
| R-AM-TMP | 28 | 25 | 20 | 0.8 | 0.714 | 0.755 |
|--------------------+-----------+----------------+-------------------+-----------+--------+-------|
| null | 9948 | 9535 | 9130 | 0.958 | 0.918 | 0.937 |
|--------------------+-----------+----------------+-------------------+-----------+--------+-------|
| All (without null) | 6243 | 6656 | 5427 | 0.815 | 0.869 | 0.841 |
|--------------------+-----------+----------------+-------------------+-----------+--------+-------|
| All | 16191 | 16191 | 14557 | 0.899 | 0.899 | 0.899 |
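The per-label scores in this table derive from the three count columns. A minimal Python sketch (the function name is ours, not from the codebase; counts are copied from the A0 row):

```python
def label_scores(gold, predicted, correct):
    """Precision, recall and F1 from the TotalGold / TotalPredicted /
    CorrectPrediction columns, rounded to three decimals as in the table."""
    p = correct / predicted if predicted else 0.0
    r = correct / gold if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return round(p, 3), round(r, 3), round(f1, 3)

# A0 row: gold=1760, predicted=1807, correct=1609
print(label_scores(1760, 1807, 1609))  # (0.89, 0.914, 0.902)
```

Note that the All row has identical precision and recall because, once the null class is included, TotalGold and TotalPredicted are both 16191.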
Number of Sentences : 2416
Number of Propositions : 5267
Percentage of perfect props : 50.33
corr. excess missed prec. rec. F1
------------------------------------------------------------
Overall 10332 2976 3745 77.64 73.40 75.46
----------
A0 2995 519 568 85.23 84.06 84.64
A1 3624 967 1303 78.94 73.55 76.15
A2 666 287 444 69.88 60.00 64.57
A3 96 45 77 68.09 55.49 61.15
A4 65 25 37 72.22 63.73 67.71
A5 0 0 5 0.00 0.00 0.00
AM-ADV 275 215 231 56.12 54.35 55.22
AM-CAU 36 28 37 56.25 49.32 52.55
AM-DIR 31 35 54 46.97 36.47 41.06
AM-DIS 245 96 75 71.85 76.56 74.13
AM-EXT 15 14 17 51.72 46.88 49.18
AM-LOC 201 158 162 55.99 55.37 55.68
AM-MNR 183 137 161 57.19 53.20 55.12
AM-MOD 516 17 35 96.81 93.65 95.20
AM-NEG 216 17 14 92.70 93.91 93.30
AM-PNC 44 60 71 42.31 38.26 40.18
AM-PRD 0 0 5 0.00 0.00 0.00
AM-REC 0 0 2 0.00 0.00 0.00
AM-TMP 789 271 298 74.43 72.59 73.50
R-A0 189 27 35 87.50 84.38 85.91
R-A1 102 45 54 69.39 65.38 67.33
R-A2 5 2 11 71.43 31.25 43.48
R-A3 0 0 1 0.00 0.00 0.00
R-A4 0 0 1 0.00 0.00 0.00
R-AM-ADV 0 0 2 0.00 0.00 0.00
R-AM-CAU 0 0 4 0.00 0.00 0.00
R-AM-EXT 0 0 1 0.00 0.00 0.00
R-AM-LOC 13 1 8 92.86 61.90 74.29
R-AM-MNR 0 0 6 0.00 0.00 0.00
R-AM-TMP 26 10 26 72.22 50.00 59.09
------------------------------------------------------------
V 5066 10 201 99.80 96.18 97.96
------------------------------------------------------------
VerbArgumentIdentifier: Parameter sets:
VerbArgumentIdentifier: 1:
VerbArgumentIdentifier: 2: learningRate = 0.2
VerbArgumentIdentifier: 3: learningRate = 0.30000000000000004
VerbArgumentIdentifier: 4: learningRate = 0.4
VerbArgumentIdentifier: 5: learningRate = 0.5
VerbArgumentIdentifier: 6: thickness = 0.5
VerbArgumentIdentifier: 7: learningRate = 0.2, thickness = 0.5
VerbArgumentIdentifier: 8: learningRate = 0.30000000000000004, thickness = 0.5
VerbArgumentIdentifier: 9: learningRate = 0.4, thickness = 0.5
VerbArgumentIdentifier: 10: learningRate = 0.5, thickness = 0.5
VerbArgumentIdentifier: 11: thickness = 1.0
VerbArgumentIdentifier: 12: learningRate = 0.2, thickness = 1.0
VerbArgumentIdentifier: 13: learningRate = 0.30000000000000004, thickness = 1.0
VerbArgumentIdentifier: 14: learningRate = 0.4, thickness = 1.0
VerbArgumentIdentifier: 15: learningRate = 0.5, thickness = 1.0
VerbArgumentIdentifier: 16: thickness = 1.5
VerbArgumentIdentifier: 17: learningRate = 0.2, thickness = 1.5
VerbArgumentIdentifier: 18: learningRate = 0.30000000000000004, thickness = 1.5
VerbArgumentIdentifier: 19: learningRate = 0.4, thickness = 1.5
VerbArgumentIdentifier: 20: learningRate = 0.5, thickness = 1.5
VerbArgumentIdentifier: 21: thickness = 2.0
VerbArgumentIdentifier: 22: learningRate = 0.2, thickness = 2.0
VerbArgumentIdentifier: 23: learningRate = 0.30000000000000004, thickness = 2.0
VerbArgumentIdentifier: 24: learningRate = 0.4, thickness = 2.0
VerbArgumentIdentifier: 25: learningRate = 0.5, thickness = 2.0
VerbArgumentIdentifier: 26: thickness = 2.5
VerbArgumentIdentifier: 27: learningRate = 0.2, thickness = 2.5
VerbArgumentIdentifier: 28: learningRate = 0.30000000000000004, thickness = 2.5
VerbArgumentIdentifier: 29: learningRate = 0.4, thickness = 2.5
VerbArgumentIdentifier: 30: learningRate = 0.5, thickness = 2.5
VerbArgumentIdentifier: 31: thickness = 3.0
VerbArgumentIdentifier: 32: learningRate = 0.2, thickness = 3.0
VerbArgumentIdentifier: 33: learningRate = 0.30000000000000004, thickness = 3.0
VerbArgumentIdentifier: 34: learningRate = 0.4, thickness = 3.0
VerbArgumentIdentifier: 35: learningRate = 0.5, thickness = 3.0
VerbArgumentIdentifier: 36: thickness = 3.5
VerbArgumentIdentifier: 37: learningRate = 0.2, thickness = 3.5
VerbArgumentIdentifier: 38: learningRate = 0.30000000000000004, thickness = 3.5
VerbArgumentIdentifier: 39: learningRate = 0.4, thickness = 3.5
VerbArgumentIdentifier: 40: learningRate = 0.5, thickness = 3.5
VerbArgumentIdentifier: 41: thickness = 4.0
VerbArgumentIdentifier: 42: learningRate = 0.2, thickness = 4.0
VerbArgumentIdentifier: 43: learningRate = 0.30000000000000004, thickness = 4.0
VerbArgumentIdentifier: 44: learningRate = 0.4, thickness = 4.0
VerbArgumentIdentifier: 45: learningRate = 0.5, thickness = 4.0
VerbArgumentIdentifier: Set Accuracy +/- Rounds
VerbArgumentIdentifier: -------------------------
VerbArgumentIdentifier: 1 97.149 0.047 5
VerbArgumentIdentifier: 2 97.148 0.071 5
VerbArgumentIdentifier: 3 97.150 0.035 5
VerbArgumentIdentifier: 4 97.146 0.045 5
VerbArgumentIdentifier: 5 97.147 0.072 5
VerbArgumentIdentifier: 6 97.154 0.043 5
VerbArgumentIdentifier: 7 97.136 0.053 10
VerbArgumentIdentifier: 8 97.147 0.032 5
VerbArgumentIdentifier: 9 97.154 0.027 5
VerbArgumentIdentifier: 10 97.136 0.023 5
VerbArgumentIdentifier: 11 97.161 0.069 5
VerbArgumentIdentifier: 12 97.152 0.031 5
VerbArgumentIdentifier: 13 97.151 0.048 5
VerbArgumentIdentifier: 14 97.145 0.029 5
VerbArgumentIdentifier: 15 97.145 0.045 5
VerbArgumentIdentifier: 16 97.165 0.028 5
VerbArgumentIdentifier: 17 97.151 0.047 5
VerbArgumentIdentifier: 18 97.149 0.029 5
VerbArgumentIdentifier: 19 97.152 0.049 5
VerbArgumentIdentifier: 20 97.163 0.034 5
VerbArgumentIdentifier: 21 97.160 0.032 5
VerbArgumentIdentifier: 22 97.148 0.051 5
VerbArgumentIdentifier: 23 97.150 0.020 5
VerbArgumentIdentifier: 24 97.151 0.062 5
VerbArgumentIdentifier: 25 97.147 0.024 5
VerbArgumentIdentifier: 26 97.165 0.045 5
VerbArgumentIdentifier: 27 97.161 0.049 5
VerbArgumentIdentifier: 28 97.157 0.032 5
VerbArgumentIdentifier: 29 97.168 0.020 5
VerbArgumentIdentifier: 30 97.149 0.063 5
VerbArgumentIdentifier: 31 97.172 0.045 5
VerbArgumentIdentifier: 32 97.168 0.065 5
VerbArgumentIdentifier: 33 97.160 0.049 5
VerbArgumentIdentifier: 34 97.162 0.022 5
VerbArgumentIdentifier: 35 97.156 0.038 10
VerbArgumentIdentifier: 36 97.180 0.045 5
VerbArgumentIdentifier: 37 97.160 0.046 5
VerbArgumentIdentifier: 38 97.163 0.050 10
VerbArgumentIdentifier: 39 97.159 0.037 5
VerbArgumentIdentifier: 40 97.163 0.045 5
VerbArgumentIdentifier: 41 97.171 0.044 5
VerbArgumentIdentifier: 42 97.160 0.045 5
VerbArgumentIdentifier: 43 97.155 0.044 5
VerbArgumentIdentifier: 44 97.157 0.015 5
VerbArgumentIdentifier: 45 97.153 0.061 5
VerbArgumentIdentifier: ----
VerbArgumentIdentifier: Best Accuracy: 97.18015
VerbArgumentIdentifier: with thickness = 3.5
VerbArgumentIdentifier: and 5 rounds
| (Threshold, beta) | totalGold | totalPredicted | correct | P | R | F1 | F2 |
|--------------------------------------------+-----------+----------------+---------+-------+-------+-------+-------|
| (0.1, 0.1) | 12409 | 14988 | 12109 | 80.79 | 97.58 | 88.4 | 93.69 |
| (0.2, 0.1) | 12409 | 14373 | 11982 | 83.36 | 96.56 | 89.48 | 93.6 |
| (0.1, 0.2) | 12409 | 14187 | 11929 | 84.08 | 96.13 | 89.71 | 93.45 |
| (0.30000000000000004, 0.1) | 12409 | 14037 | 11885 | 84.67 | 95.78 | 89.88 | 93.33 |
| (0.1, 0.30000000000000004) | 12409 | 13967 | 11864 | 84.94 | 95.61 | 89.96 | 93.27 |
| (0.2, 0.2) | 12409 | 13946 | 11856 | 85.01 | 95.54 | 89.97 | 93.23 |
| (0.1, 0.4) | 12409 | 13853 | 11830 | 85.4 | 95.33 | 90.09 | 93.17 |
| (0.2, 0.30000000000000004) | 12409 | 13793 | 11806 | 85.59 | 95.14 | 90.12 | 93.06 |
| (0.1, 0.5) | 12409 | 13781 | 11801 | 85.63 | 95.1 | 90.12 | 93.04 |
| (0.30000000000000004, 0.2) | 12409 | 13772 | 11797 | 85.66 | 95.07 | 90.12 | 93.02 |
| (0.4, 0.1) | 12409 | 13758 | 11793 | 85.72 | 95.04 | 90.14 | 93.01 |
| (0.1, 0.6) | 12409 | 13728 | 11780 | 85.81 | 94.93 | 90.14 | 92.95 |
| (0.2, 0.4) | 12409 | 13715 | 11775 | 85.85 | 94.89 | 90.15 | 92.93 |
| (0.1, 0.7) | 12409 | 13698 | 11766 | 85.9 | 94.82 | 90.14 | 92.89 |
| (0.2, 0.5) | 12409 | 13674 | 11755 | 85.97 | 94.73 | 90.14 | 92.84 |
| (0.1, 0.7999999999999999) | 12409 | 13674 | 11755 | 85.97 | 94.73 | 90.14 | 92.84 |
| (0.30000000000000004, 0.30000000000000004) | 12409 | 13676 | 11756 | 85.96 | 94.74 | 90.14 | 92.84 |
| (0.1, 0.8999999999999999) | 12409 | 13658 | 11750 | 86.03 | 94.69 | 90.15 | 92.82 |
| (0.2, 0.6) | 12409 | 13650 | 11746 | 86.05 | 94.66 | 90.15 | 92.8 |
| (0.1, 0.9999999999999999) | 12409 | 13644 | 11741 | 86.05 | 94.62 | 90.13 | 92.77 |
| (0.30000000000000004, 0.4) | 12409 | 13638 | 11739 | 86.08 | 94.6 | 90.14 | 92.76 |
| (0.4, 0.2) | 12409 | 13631 | 11736 | 86.1 | 94.58 | 90.14 | 92.75 |
| (0.2, 0.7) | 12409 | 13629 | 11734 | 86.1 | 94.56 | 90.13 | 92.74 |
| (0.30000000000000004, 0.5) | 12409 | 13608 | 11728 | 86.18 | 94.51 | 90.16 | 92.72 |
| (0.2, 0.7999999999999999) | 12409 | 13614 | 11729 | 86.15 | 94.52 | 90.14 | 92.72 |
| (0.2, 0.8999999999999999) | 12409 | 13596 | 11722 | 86.22 | 94.46 | 90.15 | 92.69 |
| (0.30000000000000004, 0.6) | 12409 | 13587 | 11719 | 86.25 | 94.44 | 90.16 | 92.68 |
| (0.2, 0.9999999999999999) | 12409 | 13585 | 11719 | 86.26 | 94.44 | 90.17 | 92.68 |
| (0.4, 0.30000000000000004) | 12409 | 13582 | 11718 | 86.28 | 94.43 | 90.17 | 92.68 |
| (0.30000000000000004, 0.7) | 12409 | 13572 | 11711 | 86.29 | 94.38 | 90.15 | 92.64 |
| (0.30000000000000004, 0.7999999999999999) | 12409 | 13561 | 11706 | 86.32 | 94.33 | 90.15 | 92.62 |
| (0.4, 0.4) | 12409 | 13558 | 11704 | 86.33 | 94.32 | 90.15 | 92.6 |
| (0.30000000000000004, 0.8999999999999999) | 12409 | 13556 | 11703 | 86.33 | 94.31 | 90.14 | 92.6 |
| (0.30000000000000004, 0.9999999999999999) | 12409 | 13548 | 11701 | 86.37 | 94.29 | 90.16 | 92.59 |
| (0.4, 0.5) | 12409 | 13545 | 11699 | 86.37 | 94.28 | 90.15 | 92.58 |
| (0.4, 0.6) | 12409 | 13536 | 11695 | 86.4 | 94.25 | 90.15 | 92.56 |
| (0.4, 0.7) | 12409 | 13532 | 11693 | 86.41 | 94.23 | 90.15 | 92.55 |
| (0.4, 0.7999999999999999) | 12409 | 13528 | 11691 | 86.42 | 94.21 | 90.15 | 92.54 |
| (0.4, 0.8999999999999999) | 12409 | 13526 | 11689 | 86.42 | 94.2 | 90.14 | 92.53 |
| (0.4, 0.9999999999999999) | 12409 | 13522 | 11688 | 86.44 | 94.19 | 90.15 | 92.53 |
| (0.5, 0.8999999999999999) | 12409 | 13500 | 11672 | 86.46 | 94.06 | 90.1 | 92.44 |
| (0.5, 0.5) | 12409 | 13500 | 11672 | 86.46 | 94.06 | 90.1 | 92.44 |
| (0.5, 0.1) | 12409 | 13500 | 11672 | 86.46 | 94.06 | 90.1 | 92.44 |
| (0.5, 0.6) | 12409 | 13500 | 11672 | 86.46 | 94.06 | 90.1 | 92.44 |
| (0.5, 0.2) | 12409 | 13500 | 11672 | 86.46 | 94.06 | 90.1 | 92.44 |
| (0.5, 0.4) | 12409 | 13500 | 11672 | 86.46 | 94.06 | 90.1 | 92.44 |
| (0.5, 0.30000000000000004) | 12409 | 13500 | 11672 | 86.46 | 94.06 | 90.1 | 92.44 |
| (0.5, 0.7999999999999999) | 12409 | 13500 | 11672 | 86.46 | 94.06 | 90.1 | 92.44 |
| (0.5, 0.7) | 12409 | 13500 | 11672 | 86.46 | 94.06 | 90.1 | 92.44 |
| (0.5, 0.9999999999999999) | 12409 | 13500 | 11672 | 86.46 | 94.06 | 90.1 | 92.44 |
| (0.6, 0.9999999999999999) | 12409 | 13468 | 11653 | 86.52 | 93.91 | 90.06 | 92.33 |
| (0.6, 0.8999999999999999) | 12409 | 13463 | 11650 | 86.53 | 93.88 | 90.06 | 92.32 |
| (0.6, 0.7999999999999999) | 12409 | 13462 | 11649 | 86.53 | 93.88 | 90.05 | 92.31 |
| (0.6, 0.7) | 12409 | 13456 | 11646 | 86.55 | 93.85 | 90.05 | 92.29 |
| (0.6, 0.6) | 12409 | 13444 | 11642 | 86.6 | 93.82 | 90.06 | 92.28 |
| (0.6, 0.5) | 12409 | 13440 | 11639 | 86.6 | 93.79 | 90.05 | 92.26 |
| (0.7, 0.7999999999999999) | 12409 | 13424 | 11634 | 86.67 | 93.75 | 90.07 | 92.25 |
| (0.7, 0.8999999999999999) | 12409 | 13434 | 11635 | 86.61 | 93.76 | 90.04 | 92.24 |
| (0.6, 0.4) | 12409 | 13427 | 11634 | 86.65 | 93.75 | 90.06 | 92.24 |
| (0.7, 0.9999999999999999) | 12409 | 13436 | 11636 | 86.6 | 93.77 | 90.04 | 92.24 |
| (0.7, 0.7) | 12409 | 13418 | 11631 | 86.68 | 93.73 | 90.07 | 92.23 |
| (0.6, 0.30000000000000004) | 12409 | 13408 | 11628 | 86.72 | 93.71 | 90.08 | 92.22 |
| (0.7, 0.6) | 12409 | 13404 | 11625 | 86.73 | 93.68 | 90.07 | 92.2 |
| (0.7999999999999999, 0.9999999999999999) | 12409 | 13405 | 11625 | 86.72 | 93.68 | 90.07 | 92.2 |
| (0.7999999999999999, 0.8999999999999999) | 12409 | 13397 | 11623 | 86.76 | 93.67 | 90.08 | 92.2 |
| (0.7999999999999999, 0.7999999999999999) | 12409 | 13376 | 11614 | 86.83 | 93.59 | 90.08 | 92.16 |
| (0.7, 0.5) | 12409 | 13380 | 11614 | 86.8 | 93.59 | 90.07 | 92.15 |
| (0.7999999999999999, 0.7) | 12409 | 13358 | 11603 | 86.86 | 93.5 | 90.06 | 92.1 |
| (0.6, 0.2) | 12409 | 13356 | 11603 | 86.87 | 93.5 | 90.07 | 92.1 |
| (0.7, 0.4) | 12409 | 13349 | 11600 | 86.9 | 93.48 | 90.07 | 92.09 |
| (0.8999999999999999, 0.9999999999999999) | 12409 | 13343 | 11598 | 86.92 | 93.46 | 90.07 | 92.08 |
| (0.7999999999999999, 0.6) | 12409 | 13335 | 11594 | 86.94 | 93.43 | 90.07 | 92.06 |
| (0.8999999999999999, 0.8999999999999999) | 12409 | 13322 | 11586 | 86.97 | 93.37 | 90.05 | 92.01 |
| (0.8999999999999999, 0.7999999999999999) | 12409 | 13293 | 11571 | 87.05 | 93.25 | 90.04 | 91.94 |
| (0.7999999999999999, 0.5) | 12409 | 13291 | 11571 | 87.06 | 93.25 | 90.05 | 91.94 |
| (0.7, 0.30000000000000004) | 12409 | 13283 | 11568 | 87.09 | 93.22 | 90.05 | 91.93 |
| (0.8999999999999999, 0.7) | 12409 | 13251 | 11554 | 87.19 | 93.11 | 90.05 | 91.86 |
| (0.7999999999999999, 0.4) | 12409 | 13235 | 11545 | 87.23 | 93.04 | 90.04 | 91.81 |
| (0.8999999999999999, 0.6) | 12409 | 13221 | 11539 | 87.28 | 92.99 | 90.04 | 91.79 |
| (0.6, 0.1) | 12409 | 13184 | 11521 | 87.39 | 92.84 | 90.03 | 91.7 |
| (0.7, 0.2) | 12409 | 13175 | 11518 | 87.42 | 92.82 | 90.04 | 91.69 |
| (0.8999999999999999, 0.5) | 12409 | 13159 | 11510 | 87.47 | 92.76 | 90.03 | 91.65 |
| (0.7999999999999999, 0.30000000000000004) | 12409 | 13132 | 11496 | 87.54 | 92.64 | 90.02 | 91.58 |
| (0.8999999999999999, 0.4) | 12409 | 13061 | 11459 | 87.73 | 92.34 | 89.98 | 91.38 |
| (0.7999999999999999, 0.2) | 12409 | 12957 | 11405 | 88.02 | 91.91 | 89.92 | 91.1 |
| (0.8999999999999999, 0.30000000000000004) | 12409 | 12935 | 11389 | 88.05 | 91.78 | 89.88 | 91.01 |
| (0.7, 0.1) | 12409 | 12861 | 11347 | 88.23 | 91.44 | 89.81 | 90.78 |
| (0.8999999999999999, 0.2) | 12409 | 12623 | 11207 | 88.78 | 90.31 | 89.54 | 90 |
| (0.7999999999999999, 0.1) | 12409 | 12332 | 11019 | 89.35 | 88.8 | 89.07 | 88.91 |
| (0.8999999999999999, 0.1) | 12409 | 11254 | 10271 | 91.27 | 82.77 | 86.81 | 84.34 |
| (0.9999999999999999, 0.9999999999999999) | 12409 | 8282 | 7824 | 94.47 | 63.05 | 75.63 | 67.54 |
| (0.9999999999999999, 0.8999999999999999) | 12409 | 7190 | 6875 | 95.62 | 55.4 | 70.16 | 60.49 |
| (0.0, 0.0) | 12409 | 53942 | 12409 | 23 | 100 | 37.4 | 59.9 |
| (0.4, 0.0) | 12409 | 53942 | 12409 | 23 | 100 | 37.4 | 59.9 |
| (0.1, 0.0) | 12409 | 53942 | 12409 | 23 | 100 | 37.4 | 59.9 |
| (0.0, 0.1) | 12409 | 53942 | 12409 | 23 | 100 | 37.4 | 59.9 |
| (0.0, 0.2) | 12409 | 53942 | 12409 | 23 | 100 | 37.4 | 59.9 |
| (0.0, 0.4) | 12409 | 53942 | 12409 | 23 | 100 | 37.4 | 59.9 |
| (0.0, 0.7999999999999999) | 12409 | 53942 | 12409 | 23 | 100 | 37.4 | 59.9 |
| (0.2, 0.0) | 12409 | 53942 | 12409 | 23 | 100 | 37.4 | 59.9 |
| (0.0, 0.7) | 12409 | 53942 | 12409 | 23 | 100 | 37.4 | 59.9 |
| (0.30000000000000004, 0.0) | 12409 | 53942 | 12409 | 23 | 100 | 37.4 | 59.9 |
| (0.0, 0.6) | 12409 | 53942 | 12409 | 23 | 100 | 37.4 | 59.9 |
| (0.0, 0.8999999999999999) | 12409 | 53942 | 12409 | 23 | 100 | 37.4 | 59.9 |
| (0.0, 0.30000000000000004) | 12409 | 53942 | 12409 | 23 | 100 | 37.4 | 59.9 |
| (0.0, 0.9999999999999999) | 12409 | 53942 | 12409 | 23 | 100 | 37.4 | 59.9 |
| (0.0, 0.5) | 12409 | 53942 | 12409 | 23 | 100 | 37.4 | 59.9 |
| (0.9999999999999999, 0.7999999999999999) | 12409 | 5784 | 5590 | 96.65 | 45.05 | 61.45 | 50.43 |
| (0.9999999999999999, 0.7) | 12409 | 4061 | 3965 | 97.64 | 31.95 | 48.15 | 36.92 |
| (0.9999999999999999, 0.6) | 12409 | 2133 | 2099 | 98.41 | 16.92 | 28.87 | 20.27 |
| (0.9999999999999999, 0.5) | 12409 | 587 | 582 | 99.15 | 4.69 | 8.96 | 5.79 |
| (0.9999999999999999, 0.4) | 12409 | 27 | 26 | 96.3 | 0.21 | 0.42 | 0.26 |
| (0.9999999999999999, 0.30000000000000004) | 12409 | 1 | 1 | 100 | 0.01 | 0.02 | 0.01 |
| (0.6, 0.0) | 12409 | 0 | 0 | 0 | 0 | 0 | 0 |
| (0.5, 0.0) | 12409 | 0 | 0 | 0 | 0 | 0 | 0 |
| (0.7, 0.0) | 12409 | 0 | 0 | 0 | 0 | 0 | 0 |
| (0.9999999999999999, 0.1) | 12409 | 0 | 0 | 0 | 0 | 0 | 0 |
| (0.8999999999999999, 0.0) | 12409 | 0 | 0 | 0 | 0 | 0 | 0 |
| (0.9999999999999999, 0.2) | 12409 | 0 | 0 | 0 | 0 | 0 | 0 |
| (0.9999999999999999, 0.0) | 12409 | 0 | 0 | 0 | 0 | 0 | 0 |
| (0.7999999999999999, 0.0) | 12409 | 0 | 0 | 0 | 0 | 0 | 0 |
Based on F2 measure, recommended (threshold, beta) = (0.1, 0.1)
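The P/R/F columns above follow the standard F-beta definition, F_beta = (1 + beta^2) * P * R / (beta^2 * P + R), with F2 weighting recall four times as heavily as precision (which is why the high-recall (0.1, 0.1) setting wins). A small awk sketch reproducing the first table row from its raw counts (gold = 12409, predicted = 14988, correct = 12109):

```shell
#!/bin/sh
# Sketch: recompute P, R, F1 and F2 for the (0.1, 0.1) row of the table
# from its raw counts, using the standard F-beta formula.
awk 'BEGIN {
  gold = 12409; pred = 14988; corr = 12109
  P = 100 * corr / pred           # precision = correct / totalPredicted
  R = 100 * corr / gold           # recall    = correct / totalGold
  f1 = 2 * P * R / (P + R)        # F-beta with beta = 1
  f2 = 5 * P * R / (4 * P + R)    # F-beta with beta = 2 (recall-weighted)
  printf "P=%.2f R=%.2f F1=%.2f F2=%.2f\n", P, R, f1, f2
}'
```

This prints `P=80.79 R=97.58 F1=88.40 F2=93.69`, matching the first row (the table rounds F1 to one decimal, 88.4).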
#!/bin/sh
VERSION=`mvn org.apache.maven.plugins:maven-help-plugin:2.1.1:evaluate -Dexpression=project.version | grep -v 'INFO'`
echo "Deploying illinois-srl-models-nom-CHARNIAK-${VERSION}.jar"
mvn deploy:deploy-file \
-Dfile=target/illinois-srl-models-nom-CHARNIAK-${VERSION}.jar \
-DgroupId=edu.illinois.cs.cogcomp \
-DartifactId=illinois-srl \
-Dversion=${VERSION} \
-Dclassifier=models-nom-charniak \
-Dpackaging=jar \
-Durl=scp://bilbo.cs.uiuc.edu:/mounts/bilbo/disks/0/www/cogcomp/html/m2repo \
-DrepositoryId=CogcompSoftware
echo "Deploying illinois-srl-models-nom-STANFORD-${VERSION}.jar"
mvn deploy:deploy-file \
-Dfile=target/illinois-srl-models-nom-STANFORD-${VERSION}.jar \
-DgroupId=edu.illinois.cs.cogcomp \
-DartifactId=illinois-srl \
-Dversion=${VERSION} \
-Dclassifier=models-nom-stanford \
-Dpackaging=jar \
-Durl=scp://bilbo.cs.uiuc.edu:/mounts/bilbo/disks/0/www/cogcomp/html/m2repo \
-DrepositoryId=CogcompSoftware
echo "Deploying illinois-srl-models-verb-CHARNIAK-${VERSION}.jar"
mvn deploy:deploy-file \
-Dfile=target/illinois-srl-models-verb-CHARNIAK-${VERSION}.jar \
-DgroupId=edu.illinois.cs.cogcomp \
-DartifactId=illinois-srl \
-Dversion=${VERSION} \
-Dclassifier=models-verb-charniak \
-Dpackaging=jar \
-Durl=scp://bilbo.cs.uiuc.edu:/mounts/bilbo/disks/0/www/cogcomp/html/m2repo \
-DrepositoryId=CogcompSoftware
echo "Deploying illinois-srl-models-verb-STANFORD-${VERSION}.jar"
mvn deploy:deploy-file \
-Dfile=target/illinois-srl-models-verb-STANFORD-${VERSION}.jar \
-DgroupId=edu.illinois.cs.cogcomp \
-DartifactId=illinois-srl \
-Dversion=${VERSION} \
-Dclassifier=models-verb-stanford \
-Dpackaging=jar \
-Durl=scp://bilbo.cs.uiuc.edu:/mounts/bilbo/disks/0/www/cogcomp/html/m2repo \
-DrepositoryId=CogcompSoftware
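The four `deploy:deploy-file` invocations above differ only in the model type (nom/verb) and parser (CHARNIAK/STANFORD). A sketch of the same script as a nested loop, assuming the artifact naming scheme above (the full mvn flag list is elided here for brevity):

```shell
#!/bin/sh
# Sketch only: iterate over the model/parser grid instead of repeating
# the deploy block four times. VERSION is hard-coded for illustration;
# the real script derives it from mvn help:evaluate as above.
VERSION=5.1
for MODEL in nom verb; do
  for PARSER in CHARNIAK STANFORD; do
    # -Dclassifier uses the lowercase parser name, the jar the uppercase one
    CLASSIFIER="models-${MODEL}-$(echo "$PARSER" | tr 'A-Z' 'a-z')"
    echo "Deploying illinois-srl-models-${MODEL}-${PARSER}-${VERSION}.jar"
    # mvn deploy:deploy-file \
    #   -Dfile=target/illinois-srl-models-${MODEL}-${PARSER}-${VERSION}.jar \
    #   -Dclassifier=${CLASSIFIER} \
    #   ... (remaining -D flags exactly as in the blocks above)
  done
done
```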
#!/bin/bash
mvn compile
mvn -q dependency:copy-dependencies
CP=target/classes:config:target/dependency/*
MEMORY="-Xmx25g"
OPTIONS="-ea $MEMORY -cp $CP "
MAINCLASS=edu.illinois.cs.cogcomp.srl.SemanticRoleLabeler
time nice java $OPTIONS $MAINCLASS "$@"
#!/bin/bash
svn up
mvn compile
mvn -q dependency:copy-dependencies
@@ -9,6 +8,6 @@ MEMORY="-Xmx25g"
 OPTIONS="-ea $MEMORY -cp $CP "
-MAINCLASS=edu.illinois.cs.cogcomp.newsrl.Main
+MAINCLASS=edu.illinois.cs.cogcomp.srl.Main
 time nice java $OPTIONS $MAINCLASS "$@"