
I have the following line of code, which initializes the Stanford lexicalized parser, and it fails with java.lang.NoSuchMethodError: edu.stanford.nlp.util.Generics.newHashMap()Ljava/util/Map;

lp = LexicalizedParser.loadModel("edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz"); 

The exception below occurs only when I move the code from a Java SE application to a Java EE application:

Caused by: java.lang.NoSuchMethodError: edu.stanford.nlp.util.Generics.newHashMap()Ljava/util/Map; 
    at edu.stanford.nlp.parser.lexparser.BinaryGrammar.init(BinaryGrammar.java:223) 
    at edu.stanford.nlp.parser.lexparser.BinaryGrammar.readObject(BinaryGrammar.java:211) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 

How can I resolve this?


Please post more of the stack trace, at least down to the "Caused by: ..." part.


I have updated the question, @KhalilM.


A "NoSuchMethodError" means, I would guess, that you compiled against one version of the NLP library and are running against a different one. Make sure you have the correct version on your classpath.

Answers

You can refer to the FAQ:

http://nlp.stanford.edu/software/corenlp-faq.shtml#nosuchmethoderror

If you see an exception that looks like the following:

Caused by: java.lang.NoSuchMethodError: edu.stanford.nlp.util.Generics.newHashMap()Ljava/util/Map;
    at edu.stanford.nlp.pipeline.AnnotatorPool.<init>(AnnotatorPool.java:27)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.getDefaultAnnotatorPool(StanfordCoreNLP.java:305)

then this isn't caused by the shiny new Stanford NLP tools that you've just downloaded. It is because you also have old versions of one or more Stanford NLP tools on your classpath.

The straightforward case is if you have an older version of a Stanford NLP tool. For example, you may still have a version of Stanford NER on your classpath that was released in 2009. In this case, you should upgrade, or at least use matching versions. For any releases from 2011 on, just use tools released at the same time -- such as the most recent version of everything :) -- and they will all be compatible and play nicely together.
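If it is not obvious which copy of a class is winning, one quick diagnostic is to ask the JVM where it actually loaded a Stanford class from. This is a minimal sketch (the class name WhichJar is just for illustration, and getCodeSource() can return null for classes loaded by the bootstrap classloader):

import edu.stanford.nlp.util.Generics;

// Prints the jar (or directory) the Generics class was actually loaded from.
// If this points at an old Stanford jar, that jar is shadowing the new one.
public class WhichJar {
    public static void main(String[] args) {
        System.out.println(
                Generics.class.getProtectionDomain().getCodeSource().getLocation());
    }
}

Running this check inside the Java EE application is especially useful here, because the container's classloader, not the command-line classpath, decides which version wins.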

The tricky case of this is when people distribute jar files that hide other people's classes inside them. People think this will make things easy for users, since they can distribute one jar that has everything you need, but, in practice, as soon as people build applications using multiple components, this results in a particularly bad form of jar hell. People just shouldn't do this. The only way to check that other jar files do not contain conflicting versions of Stanford tools is to look at what is inside them (for example, with the jar -tf command).
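A rough programmatic equivalent of jar -tf suspect.jar | grep edu/stanford/nlp (the jar path below is a placeholder, not something from the question) would be:

import java.io.IOException;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

// Lists every Stanford NLP class bundled inside a suspect jar -- the same
// information jar -tf shows, filtered down to the edu/stanford/nlp package.
public class JarInspector {
    public static void main(String[] args) throws IOException {
        try (JarFile jar = new JarFile(args[0])) { // e.g. a third-party fat jar
            jar.stream()
               .map(JarEntry::getName)
               .filter(name -> name.startsWith("edu/stanford/nlp"))
               .forEach(System.out::println);
        }
    }
}

Any output from a jar that is not itself a Stanford release is a red flag.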

In practice, if you're having problems, the most common cause (in 2013-2014) is that you have ark-tweet-nlp on your classpath. The jar file in their GitHub download hides old versions of many other people's jar files, including Apache commons-codec (v1.4), commons-lang, commons-math, commons-io, Lucene; Twitter commons; Google Guava (v10); Jackson; Berkeley NLP code; Percy Liang's fig; GNU trove; and an outdated version of the Stanford POS tagger (from 2011). You should complain to them for causing you and us grief. But you can then fix the problem by using their jar file from Maven Central, which doesn't have all those other libraries stuffed inside.


@Frederic Henri, the problem was an old version of the Stanford Segmenter on my classpath. It works now, after removing that library.


As Frederic said, the best solution is to remove all the dependencies that cause this compile-time/runtime mismatch, then add the library back and rebuild. If you're using Maven:

<dependency> 
    <groupId>edu.stanford.nlp</groupId> 
    <artifactId>stanford-corenlp</artifactId> 
    <version>3.6.0</version> 
</dependency>
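If the stale classes arrive transitively through another declared dependency (rather than being shaded inside its jar), a Maven exclusion keeps them off the classpath. The groupId and artifactId of the wrapper below are placeholders for whatever artifact you find pulling in the old Stanford code:

<dependency>
    <groupId>com.example</groupId>
    <artifactId>offending-wrapper</artifactId>
    <version>1.0</version>
    <exclusions>
        <exclusion>
            <groupId>edu.stanford.nlp</groupId>
            <artifactId>stanford-corenlp</artifactId>
        </exclusion>
    </exclusions>
</dependency>

Note that an exclusion only helps when the old classes come in as a separate jar; if they are repackaged inside the other jar itself (as with the old ark-tweet-nlp download), you have to replace that jar, for example with its Maven Central version.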