Copying code on CSDN collapses it into a single line

awt_32 2018-11-08 10:17:57
Why does code copied from a post now end up all on one line??? Do we really have to reformat it by hand??????


For example: https://blog.csdn.net/plei_yue/article/details/79362372



The copied code comes out as one long line; here is what it contains:




# Master's IP address
redis.hostName=192.168.177.128
# Port
redis.port=6382
# Password, if one is set
redis.password=
# Client timeout in milliseconds; default is 2000
redis.timeout=10000
# Maximum number of idle connections
redis.maxIdle=300
# Maximum number of database connections in the pool; 0 means unlimited.
# For Jedis 2.4 and later, use redis.maxTotal instead.
#redis.maxActive=600
# Maximum number of Jedis instances the pool may allocate; replaces redis.maxActive above for Jedis 2.4 and later
redis.maxTotal=1000
# Maximum time to wait when establishing a connection; exceeding it raises an exception. -1 means unlimited.
redis.maxWaitMillis=1000
# Minimum idle time of a connection; default 1800000 ms (30 minutes)
redis.minEvictableIdleTimeMillis=300000
# Maximum number of connections released per eviction run; default 3
redis.numTestsPerEvictionRun=1024
# Interval between eviction scans in milliseconds; a negative value disables the eviction thread. Default -1
redis.timeBetweenEvictionRunsMillis=30000
# Validate connections before borrowing from the pool; on failure the connection is removed and another is tried
redis.testOnBorrow=true
# Validate connections while idle; default false
redis.testWhileIdle=true
# Redis cluster configuration
spring.redis.cluster.nodes=192.168.177.128:7001,192.168.177.128:7002,192.168.177.128:7003,192.168.177.128:7004,192.168.177.128:7005,192.168.177.128:7006
spring.redis.cluster.max-redirects=3
# Sentinel mode
#redis.sentinel.host1=192.168.177.128
#redis.sentinel.port1=26379
#redis.sentinel.host2=172.20.1.231
#redis.sentinel.port2=26379
---------------------
Author: w奔跑的蜗牛
Source: CSDN
Original post: https://blog.csdn.net/plei_yue/article/details/79362372
Copyright notice: This is the blogger's original article; please include a link to the original post when reposting!
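The quoted post uses these properties to configure a Jedis connection pool for Spring Boot. Purely to illustrate how the values map onto a pool, and not as the blog author's actual Spring wiring, here is a minimal plain-Java sketch; the file name redis.properties, the standalone (non-cluster) connection, and the fallback defaults are assumptions:

// Minimal sketch: load the redis.properties shown above and build a Jedis pool from it.
// Not the quoted blog's Spring configuration; "redis.properties" as the file name is assumed.
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

public class RedisPoolDemo {
    public static void main(String[] args) throws IOException {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream("redis.properties")) {
            props.load(in);
        }

        // Map the pool-related properties onto a JedisPoolConfig.
        JedisPoolConfig poolConfig = new JedisPoolConfig();
        poolConfig.setMaxIdle(Integer.parseInt(props.getProperty("redis.maxIdle", "300")));
        poolConfig.setMaxTotal(Integer.parseInt(props.getProperty("redis.maxTotal", "1000")));
        poolConfig.setMaxWaitMillis(Long.parseLong(props.getProperty("redis.maxWaitMillis", "1000")));
        poolConfig.setTestOnBorrow(Boolean.parseBoolean(props.getProperty("redis.testOnBorrow", "true")));
        poolConfig.setTestWhileIdle(Boolean.parseBoolean(props.getProperty("redis.testWhileIdle", "true")));

        String host = props.getProperty("redis.hostName", "127.0.0.1");
        int port = Integer.parseInt(props.getProperty("redis.port", "6379"));
        int timeout = Integer.parseInt(props.getProperty("redis.timeout", "2000"));

        // Borrow a connection, ping the server, and return everything to the pool.
        try (JedisPool pool = new JedisPool(poolConfig, host, port, timeout);
             Jedis jedis = pool.getResource()) {
            System.out.println("PING -> " + jedis.ping());
        }
    }
}

The spring.redis.cluster.* and sentinel properties in the quoted file would instead be consumed by spring-data-redis (for example via a RedisClusterConfiguration) rather than a plain JedisPool.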






What on earth is going on~~~~~~~~~~~~~~~~~~~
21 replies
We suggest using the Google Chrome browser; click here and copy the code.
万宇宙 2020-05-07
Quoting reply #7 by awt_32:
A week has gone by and it still isn't fixed
Two years have gone by and it still isn't fixed
wiki_Li 2019-07-06
Sorry, I don't know how to fix it either. My comment above was just to test whether the approach I guessed at works. Apologies.
wiki_Li 2019-07-06
# Copyright 2015 Google Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ============================================================================== """Builds the CIFAR-10 network. Summary of available functions: # Compute input images and labels for training. If you would like to run # evaluations, use input() instead. inputs, labels = distorted_inputs() # Compute inference on the model inputs to make a prediction. predictions = inference(inputs) # Compute the total loss of the prediction with respect to the labels. loss = loss(predictions, labels) # Create a graph to run one step of training with respect to the loss. train_op = train(loss, global_step) """ # pylint: disable=missing-docstring from __future__ import absolute_import from __future__ import division from __future__ import print_function import gzip import os import re import sys import tarfile import tensorflow.python.platform from six.moves import urllib import tensorflow as tf import cifar10_input FLAGS = tf.app.flags.FLAGS # Basic model parameters. tf.app.flags.DEFINE_integer('batch_size', 128, """Number of images to process in a batch.""") tf.app.flags.DEFINE_string('data_dir', '/tmp/cifar10_data', """Path to the CIFAR-10 data directory.""") # Global constants describing the CIFAR-10 data set. IMAGE_SIZE = cifar10_input.IMAGE_SIZE NUM_CLASSES = cifar10_input.NUM_CLASSES NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN = cifar10_input.NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN NUM_EXAMPLES_PER_EPOCH_FOR_EVAL = cifar10_input.NUM_EXAMPLES_PER_EPOCH_FOR_EVAL # Constants describing the training process. MOVING_AVERAGE_DECAY = 0.9999 # The decay to use for the moving average. NUM_EPOCHS_PER_DECAY = 350.0 # Epochs after which learning rate decays. LEARNING_RATE_DECAY_FACTOR = 0.1 # Learning rate decay factor. INITIAL_LEARNING_RATE = 0.1 # Initial learning rate. # If a model is trained with multiple GPU's prefix all Op names with tower_name # to differentiate the operations. Note that this prefix is removed from the # names of the summaries when visualizing a model. TOWER_NAME = 'tower' DATA_URL = 'http://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz' def _activation_summary(x): """Helper to create summaries for activations. Creates a summary that provides a histogram of activations. Creates a summary that measure the sparsity of activations. Args: x: Tensor Returns: nothing """ # Remove 'tower_[0-9]/' from the name in case this is a multi-GPU training # session. This helps the clarity of presentation on tensorboard. tensor_name = re.sub('%s_[0-9]*/' % TOWER_NAME, '', x.op.name) tf.summary.histogram(tensor_name + '/activations', x) tf.summary.scalar(tensor_name + '/sparsity', tf.nn.zero_fraction(x)) def _variable_on_cpu(name, shape, initializer): """Helper to create a Variable stored on CPU memory. 
Args: name: name of the variable shape: list of ints initializer: initializer for Variable Returns: Variable Tensor """ with tf.device('/cpu:0'): var = tf.get_variable(name, shape, initializer=initializer) return var def _variable_with_weight_decay(name, shape, stddev, wd): """Helper to create an initialized Variable with weight decay. Note that the Variable is initialized with a truncated normal distribution. A weight decay is added only if one is specified. Args: name: name of the variable shape: list of ints stddev: standard deviation of a truncated Gaussian wd: add L2Loss weight decay multiplied by this float. If None, weight decay is not added for this Variable. Returns: Variable Tensor """ var = _variable_on_cpu(name, shape, tf.truncated_normal_initializer(stddev=stddev)) if wd: weight_decay = tf.multiply(tf.nn.l2_loss(var), wd, name='weight_loss') tf.add_to_collection('losses', weight_decay) return var def distorted_inputs(): """Construct distorted input for CIFAR training using the Reader ops. Returns: images: Images. 4D tensor of [batch_size, IMAGE_SIZE, IMAGE_SIZE, 3] size. labels: Labels. 1D tensor of [batch_size] size. Raises: ValueError: If no data_dir """ if not FLAGS.data_dir: raise ValueError('Please supply a data_dir') data_dir = os.path.join(FLAGS.data_dir, 'cifar-10-batches-bin') return cifar10_input.distorted_inputs(data_dir=data_dir, batch_size=FLAGS.batch_size) def inputs(eval_data): """Construct input for CIFAR evaluation using the Reader ops. Args: eval_data: bool, indicating if one should use the train or eval data set. Returns: images: Images. 4D tensor of [batch_size, IMAGE_SIZE, IMAGE_SIZE, 3] size. labels: Labels. 1D tensor of [batch_size] size. Raises: ValueError: If no data_dir """ if not FLAGS.data_dir: raise ValueError('Please supply a data_dir') data_dir = os.path.join(FLAGS.data_dir, 'cifar-10-batches-bin') return cifar10_input.inputs(eval_data=eval_data, data_dir=data_dir, batch_size=FLAGS.batch_size) def inference(images): """Build the CIFAR-10 model. Args: images: Images returned from distorted_inputs() or inputs(). Returns: Logits. """ # We instantiate all variables using tf.get_variable() instead of # tf.Variable() in order to share variables across multiple GPU training runs. # If we only ran this model on a single GPU, we could simplify this function # by replacing all instances of tf.get_variable() with tf.Variable(). 
# # conv1 with tf.variable_scope('conv1') as scope: kernel = _variable_with_weight_decay('weights', shape=[5, 5, 3, 64], stddev=1e-4, wd=0.0) conv = tf.nn.conv2d(images, kernel, [1, 1, 1, 1], padding='SAME') biases = _variable_on_cpu('biases', [64], tf.constant_initializer(0.0)) bias = tf.nn.bias_add(conv, biases) conv1 = tf.nn.relu(bias, name=scope.name) _activation_summary(conv1) # pool1 pool1 = tf.nn.max_pool(conv1, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool1') # norm1 norm1 = tf.nn.lrn(pool1, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75, name='norm1') # conv2 with tf.variable_scope('conv2') as scope: kernel = _variable_with_weight_decay('weights', shape=[5, 5, 64, 64], stddev=1e-4, wd=0.0) conv = tf.nn.conv2d(norm1, kernel, [1, 1, 1, 1], padding='SAME') biases = _variable_on_cpu('biases', [64], tf.constant_initializer(0.1)) bias = tf.nn.bias_add(conv, biases) conv2 = tf.nn.relu(bias, name=scope.name) _activation_summary(conv2) # norm2 norm2 = tf.nn.lrn(conv2, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75, name='norm2') # pool2 pool2 = tf.nn.max_pool(norm2, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool2') # local3 with tf.variable_scope('local3') as scope: # Move everything into depth so we can perform a single matrix multiply. dim = 1 for d in pool2.get_shape()[1:].as_list(): dim *= d reshape = tf.reshape(pool2, [FLAGS.batch_size, dim]) weights = _variable_with_weight_decay('weights', shape=[dim, 384], stddev=0.04, wd=0.004) biases = _variable_on_cpu('biases', [384], tf.constant_initializer(0.1)) local3 = tf.nn.relu(tf.matmul(reshape, weights) + biases, name=scope.name) _activation_summary(local3) # local4 with tf.variable_scope('local4') as scope: weights = _variable_with_weight_decay('weights', shape=[384, 192], stddev=0.04, wd=0.004) biases = _variable_on_cpu('biases', [192], tf.constant_initializer(0.1)) local4 = tf.nn.relu(tf.matmul(local3, weights) + biases, name=scope.name) _activation_summary(local4) # softmax, i.e. softmax(WX + b) with tf.variable_scope('softmax_linear') as scope: weights = _variable_with_weight_decay('weights', [192, NUM_CLASSES], stddev=1/192.0, wd=0.0) biases = _variable_on_cpu('biases', [NUM_CLASSES], tf.constant_initializer(0.0)) softmax_linear = tf.add(tf.matmul(local4, weights), biases, name=scope.name) _activation_summary(softmax_linear) return softmax_linear def loss(logits, labels): """Add L2Loss to all the trainable variables. Add summary for for "Loss" and "Loss/avg". Args: logits: Logits from inference(). labels: Labels from distorted_inputs or inputs(). 1-D tensor of shape [batch_size] Returns: Loss tensor of type float. """ # Reshape the labels into a dense Tensor of # shape [batch_size, NUM_CLASSES]. sparse_labels = tf.reshape(labels, [FLAGS.batch_size, 1]) indices = tf.reshape(tf.range(FLAGS.batch_size), [FLAGS.batch_size, 1]) concated = tf.concat([indices, sparse_labels], 1) dense_labels = tf.sparse_to_dense(concated, [FLAGS.batch_size, NUM_CLASSES], 1.0, 0.0) # Calculate the average cross entropy loss across the batch. cross_entropy = tf.nn.softmax_cross_entropy_with_lo
ghostnj 2019-05-09
Never got fixed in Firefox.
Switching to IE solves it perfectly.
Ginray 2019-03-09
Brilliant, the way to solve a compatibility problem is to switch browsers.
神米米 2019-01-18
The problem is still here; copying code in Firefox is still broken.
awt_32 2018-12-05
A month has gone by. Apparently CSDN has known about the compatibility issue all along and just can't be bothered to fix it...
awt_32 2018-11-20
Quoting reply #9 by huhf4:
[quote=Quoting reply #7 by awt_32:]
A week has gone by and it still isn't fixed


Hello!
Copied code must be pasted into a code snippet block before the code formatting will show.[/quote]


So... does that count as the problem being solved?
flybirding10011 2018-11-20
Quoting reply #12 by awt_32:
[quote=Quoting reply #9 by huhf4:] [quote=Quoting reply #7 by awt_32:] A week has gone by and it still isn't fixed
Hello! Copied code must be pasted into a code snippet block before the code formatting will show.[/quote] So... does that count as the problem being solved?[/quote] Hi, the root cause of this problem is incompatibility across browsers. For now, please use the more compatible Chrome browser for copy-and-paste.
CSDN客服-糊胡 2018-11-14
Quoting reply #7 by awt_32:
A week has gone by and it still isn't fixed
Hello! Copied code must be pasted into a code snippet block before the code formatting will show.
awt_32 2018-11-14
"Eclipse shortcut — copy the current line to the line above or below": what does that have to do with this thread?
awt_32 2018-11-14
A week has gone by and it still isn't fixed
cpongo7 2018-11-14
Quoting reply #10 by qq_16762211:
Tested it myself: Firefox doesn't work, Chrome does. Tough luck for the Firefox users.


At the moment, copying a code snippet in Firefox by selecting everything does produce this problem. If you run into it, we suggest:
1. Use the copy button at the top right of the code snippet to copy and paste it.
2. If there is no copy button (for example, code that was not posted with the code snippet tool), switch to Chrome.
骑车去看海 2018-11-14
Tested it myself: Firefox doesn't work, Chrome does. Tough luck for the Firefox users.
我是CSDN客服 2018-11-09
The issue has been reported to our engineers.
awt_32 2018-11-08
I tried Chrome and IE, and the copied code comes out fine there. Did you switch editors without testing browser compatibility????? Also, why is ITEYE no longer maintained?
awt_32 2018-11-08
A formatting problem? What formatting problem? I want to copy code, and what I copy right now is unusable. How is CSDN going to fix this???? And all I get is one line about a "formatting problem"? Whose formatting problem??
awt_32 2018-11-08
I'm on Firefox. Unbelievable!!!
(☆随缘☆) 2018-11-08
1. The code ending up on one line is a formatting problem.
2. If you don't want the copyright notice, you can turn it off in your blog settings.
