Deprecated API


Contents
Deprecated Classes
org.apache.hadoop.mapred.InputFormatBase
          replaced by FileInputFormat 
org.apache.hadoop.mapred.PhasedFileSystem
          PhasedFileSystem is no longer used during speculative execution of tasks. 
org.apache.hadoop.io.SetFile.Writer
          pass a Configuration too 
org.apache.hadoop.fs.ShellCommand
          Use Shell instead. 
org.apache.hadoop.util.ShellUtil
          Use Shell.ShellCommandExecutor instead. 
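A minimal sketch of the suggested replacement, Shell.ShellCommandExecutor (the command shown is only an example):

```java
import org.apache.hadoop.util.Shell.ShellCommandExecutor;

public class ShellExample {
    public static void main(String[] args) throws Exception {
        // Run a command and capture its stdout; replaces the removed ShellUtil.
        ShellCommandExecutor exec =
            new ShellCommandExecutor(new String[] {"echo", "hello"});
        exec.execute();                       // throws an exception on failure
        System.out.println(exec.getOutput()); // command output as a String
    }
}
```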
org.apache.hadoop.streaming.StreamLineRecordReader
            
org.apache.hadoop.streaming.StreamOutputFormat
            
org.apache.hadoop.streaming.StreamSequenceRecordReader
            
org.apache.hadoop.util.ToolBase
          This class is deprecated. Classes extending ToolBase should instead implement the Tool interface and use ToolRunner for execution. Alternatively, GenericOptionsParser can be used to parse generic arguments related to the Hadoop framework. 
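A sketch of the recommended pattern, assuming a hypothetical MyTool class:

```java
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// Implements Tool instead of extending the deprecated ToolBase.
public class MyTool extends Configured implements Tool {
    public int run(String[] args) throws Exception {
        // Generic options (-D, -fs, -jt, ...) have already been parsed
        // into getConf() by ToolRunner via GenericOptionsParser.
        System.out.println("remaining args: " + args.length);
        return 0;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new MyTool(), args));
    }
}
```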
org.apache.hadoop.io.UTF8
          replaced by Text 
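A short sketch of the Text replacement (the string contents are only an example):

```java
import org.apache.hadoop.io.Text;

public class TextExample {
    public static void main(String[] args) {
        // Text stores its contents as standard UTF-8 bytes,
        // unlike the deprecated UTF8 class.
        Text greeting = new Text("hello");
        greeting.set("hello, world");
        System.out.println(greeting + " (" + greeting.getLength() + " bytes)");
    }
}
```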
 

Deprecated Methods
org.apache.hadoop.hbase.HTable.abortBatch(long)
          Batch operations are now the default. abortBatch is now implemented by HTable.abort(long). 
org.apache.hadoop.hbase.HTable.commitBatch(long)
          Batch operations are now the default. commitBatch(long) is now implemented by HTable.commit(long). 
org.apache.hadoop.hbase.HTable.commitBatch(long, long)
          Batch operations are now the default. commitBatch(long, long) is now implemented by HTable.commit(long, long). 
org.apache.hadoop.util.CopyFiles.copy(Configuration, String, String, Path, boolean, boolean)
           
org.apache.hadoop.dfs.DataNode.createSocketAddr(String)
           
org.apache.hadoop.conf.Configuration.entries()
          Use Configuration.iterator() instead. 
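A minimal sketch of the iterator-based replacement; the property name below is hypothetical:

```java
import java.util.Map;
import org.apache.hadoop.conf.Configuration;

public class ConfIterExample {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.set("example.key", "example.value"); // hypothetical property
        // Configuration is Iterable over its Map.Entry<String, String>
        // properties, replacing the deprecated entries() method.
        for (Map.Entry<String, String> entry : conf) {
            System.out.println(entry.getKey() + " = " + entry.getValue());
        }
    }
}
```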
org.apache.hadoop.conf.Configuration.get(String, Object)
          A side map of Configuration to Object should be used instead. 
org.apache.hadoop.fs.FileSystem.getBlockSize(Path)
          Use getFileStatus() instead 
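A sketch of the getFileStatus() replacement, which also covers the similarly deprecated getLength(Path), getReplication(Path), and isDirectory(Path) below; the path is hypothetical:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FileStatusExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path p = new Path("/tmp/example.txt"); // hypothetical path
        // One getFileStatus() call replaces the separate per-attribute
        // FileSystem accessors.
        FileStatus status = fs.getFileStatus(p);
        System.out.println("block size:  " + status.getBlockSize());
        System.out.println("length:      " + status.getLen());
        System.out.println("replication: " + status.getReplication());
        System.out.println("directory?   " + status.isDir());
    }
}
```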
org.apache.hadoop.record.compiler.generated.SimpleCharStream.getColumn()
            
org.apache.hadoop.hbase.hql.generated.SimpleCharStream.getColumn()
            
org.apache.hadoop.io.SequenceFile.getCompressionType(Configuration)
          Use JobConf.getMapOutputCompressionType() to get SequenceFile.CompressionType for intermediate map-outputs or SequenceFileOutputFormat.getOutputCompressionType(org.apache.hadoop.mapred.JobConf) to get SequenceFile.CompressionType for job-outputs. 
org.apache.hadoop.mapred.Counters.Group.getCounter(String)
            
org.apache.hadoop.mapred.Counters.Group.getCounterNames()
          iterate through the group instead 
org.apache.hadoop.mapred.Counters.Group.getDisplayName(String)
          get the counter directly 
org.apache.hadoop.mapred.FileSplit.getFile()
          Call FileSplit.getPath() instead. 
org.apache.hadoop.mapred.JobConf.getInputKeyClass()
          Call RecordReader.createKey(). 
org.apache.hadoop.mapred.JobConf.getInputValueClass()
          Call RecordReader.createValue(). 
org.apache.hadoop.fs.FileSystem.getLength(Path)
          Use getFileStatus() instead 
org.apache.hadoop.fs.kfs.KosmosFileSystem.getLength(Path)
           
org.apache.hadoop.record.compiler.generated.SimpleCharStream.getLine()
            
org.apache.hadoop.hbase.hql.generated.SimpleCharStream.getLine()
            
org.apache.hadoop.mapred.ClusterStatus.getMaxTasks()
          Use ClusterStatus.getMaxMapTasks() and/or ClusterStatus.getMaxReduceTasks() 
org.apache.hadoop.dfs.DistributedFileSystem.getName()
            
org.apache.hadoop.fs.RawLocalFileSystem.getName()
            
org.apache.hadoop.fs.FilterFileSystem.getName()
          call getUri() instead. 
org.apache.hadoop.fs.FileSystem.getName()
          call getUri() instead. 
org.apache.hadoop.fs.kfs.KosmosFileSystem.getName()
           
org.apache.hadoop.fs.FileSystem.getNamed(String, Configuration)
          call get(URI, Configuration) instead. 
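A sketch of the URI-based replacement for getNamed; the namenode host and port are hypothetical:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class GetByUriExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Replaces getNamed("host:port", conf); the URI scheme selects
        // the FileSystem implementation.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:9000/"), conf);
        System.out.println(fs.getUri());
    }
}
```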
org.apache.hadoop.conf.Configuration.getObject(String)
          A side map of Configuration to Object should be used instead. 
org.apache.hadoop.fs.FileSystem.getReplication(Path)
          Use getFileStatus() instead 
org.apache.hadoop.fs.kfs.KosmosFileSystem.getReplication(Path)
           
org.apache.hadoop.net.NetUtils.getServerAddress(Configuration, String, String, String)
           
org.apache.hadoop.mapred.JobConf.getSpeculativeExecution()
          Use JobConf.getMapSpeculativeExecution() or JobConf.getReduceSpeculativeExecution() instead. Should speculative execution be used for this job? Defaults to true. 
org.apache.hadoop.mapred.JobClient.getTaskOutputFilter()
           
org.apache.hadoop.ipc.Server.getUserInfo()
          should use UserGroupInformation.getCurrentUGI() 
org.apache.hadoop.fs.FileSystem.globPaths(Path)
           
org.apache.hadoop.fs.FileSystem.globPaths(Path, PathFilter)
           
org.apache.hadoop.fs.FileSystem.isDirectory(Path)
          Use getFileStatus() instead 
org.apache.hadoop.fs.kfs.KosmosFileSystem.isDirectory(Path)
           
org.apache.hadoop.fs.kfs.KosmosFileSystem.isFile(Path)
           
org.apache.hadoop.fs.ChecksumFileSystem.listPaths(Path)
           
org.apache.hadoop.fs.FileSystem.listPaths(Path)
           
org.apache.hadoop.fs.ChecksumFileSystem.listPaths(Path[])
           
org.apache.hadoop.fs.FileSystem.listPaths(Path[])
           
org.apache.hadoop.fs.FileSystem.listPaths(Path[], PathFilter)
           
org.apache.hadoop.fs.FileSystem.listPaths(Path, PathFilter)
           
org.apache.hadoop.fs.RawLocalFileSystem.lock(Path, boolean)
            
org.apache.hadoop.fs.kfs.KosmosFileSystem.lock(Path, boolean)
           
org.apache.hadoop.io.SequenceFile.Reader.next(DataOutputBuffer)
          Call SequenceFile.Reader.nextRaw(DataOutputBuffer,SequenceFile.ValueBytes). 
org.apache.hadoop.mapred.LineRecordReader.readLine(InputStream, OutputStream)
            
org.apache.hadoop.fs.RawLocalFileSystem.release(Path)
            
org.apache.hadoop.fs.kfs.KosmosFileSystem.release(Path)
           
org.apache.hadoop.hbase.HTable.renewLease(long)
          Batch updates are now the default. Consequently this method does nothing. 
org.apache.hadoop.conf.Configuration.set(String, Object)
            
org.apache.hadoop.io.SequenceFile.setCompressionType(Configuration, SequenceFile.CompressionType)
          Use one of the many SequenceFile.createWriter methods to specify the SequenceFile.CompressionType while creating the SequenceFile, or JobConf.setMapOutputCompressionType(org.apache.hadoop.io.SequenceFile.CompressionType) to specify the SequenceFile.CompressionType for intermediate map-outputs, or SequenceFileOutputFormat.setOutputCompressionType(org.apache.hadoop.mapred.JobConf, org.apache.hadoop.io.SequenceFile.CompressionType) to specify the SequenceFile.CompressionType for job-outputs. 
org.apache.hadoop.mapred.JobConf.setInputKeyClass(Class)
          Not used 
org.apache.hadoop.mapred.JobConf.setInputValueClass(Class)
          Not used 
org.apache.hadoop.conf.Configuration.setObject(String, Object)
            
org.apache.hadoop.mapred.JobConf.setSpeculativeExecution(boolean)
          Use JobConf.setMapSpeculativeExecution(boolean) or JobConf.setReduceSpeculativeExecution(boolean) instead. Turn speculative execution on or off for this job. 
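A minimal sketch of the per-phase replacement setters (the true/false values are only an example):

```java
import org.apache.hadoop.mapred.JobConf;

public class SpeculativeExample {
    public static void main(String[] args) {
        JobConf job = new JobConf();
        // Control map and reduce speculation independently instead of
        // using the deprecated all-or-nothing setSpeculativeExecution(boolean).
        job.setMapSpeculativeExecution(true);
        job.setReduceSpeculativeExecution(false);
        System.out.println("map: " + job.getMapSpeculativeExecution()
            + ", reduce: " + job.getReduceSpeculativeExecution());
    }
}
```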
org.apache.hadoop.mapred.JobClient.setTaskOutputFilter(JobClient.TaskStatusFilter)
           
org.apache.hadoop.hbase.HTable.startBatchUpdate(Text)
          Batch operations are now the default. startBatchUpdate is now implemented by HTable.startUpdate(Text). 
 

Deprecated Constructors
org.apache.hadoop.dfs.ChecksumDistributedFileSystem(InetSocketAddress, Configuration)
            
org.apache.hadoop.dfs.DistributedFileSystem(InetSocketAddress, Configuration)
            
 



Copyright © 2008 The Apache Software Foundation